Last Update 1:13 PM December 01, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Friday, 29. November 2024

Extrimian

Decentralized Identity in Education

Decentralized Identity in Education: Revolutionizing Data Management with Extrimian

The education sector is undergoing a significant transformation with the integration of decentralized digital identity and Data technologies. These innovations are set to redefine the management, access, and trust of personal and institutional data across various educational processes. Extrimian, with its advanced suite of services, is leading this change, enhancing security, privacy, and efficiency for educational institutions globally.

The Urgent Need for Decentralized Identity Solutions

Educational institutions manage a vast array of sensitive data, from student records to faculty credentials. Traditional data management methods are often plagued by security breaches, privacy issues, and bureaucratic inefficiencies. Decentralized identity technology provides a groundbreaking solution by enabling secure, sovereign control over digital identities and data.

The Impact on Educational Data Management

Extrimian’s Self-Sovereign Identity (SSI) systems allow institutions to issue, manage, and verify credentials seamlessly and securely, streamlining administrative operations and minimizing data mismanagement risks.

Advantages of Implementing Decentralized Data Management in Education:

Robust Data Privacy and Security: Extrimian’s blockchain-based solutions ensure that educational credentials and data are stored securely and immutably, safeguarding against unauthorized access and manipulation.

Operational Efficiency and Cost Reduction: By eliminating intermediaries in the credential verification process, decentralized systems enable faster processing times and lower operational costs. Institutions can verify the authenticity of credentials directly through Extrimian’s platform.

Seamless Interoperability: Designed for cross-institutional and international interoperability, Extrimian’s data solutions facilitate the reliable exchange of credentials across different educational systems, supporting global education programs and student mobility.

Empowerment of Stakeholders: Students and faculty gain control over their digital credentials, managing and sharing their data independently, which reinforces data sovereignty and user-centric digital experiences.
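As a rough illustration of the issue-and-verify flow behind such credentials, the sketch below signs a credential payload at issuance and checks it later without contacting the issuer's database. It is a hypothetical simplification: real SSI systems use DIDs and asymmetric signatures (e.g. Ed25519), not the shared HMAC key used here for brevity, and none of the names below come from Extrimian's actual API.

```python
import hashlib
import hmac
import json

# Placeholder issuer secret -- a real SSI stack would use an asymmetric
# key pair anchored to the issuer's DID, not a shared secret.
ISSUER_KEY = b"university-issuer-secret"

def issue_credential(subject: str, claim: dict) -> dict:
    """Sign a credential so verifiers can check it offline later."""
    payload = {"subject": subject, "claim": claim}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Recompute the signature over the payload and compare in constant time."""
    payload = {k: v for k, v in credential.items() if k != "signature"}
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

diploma = issue_credential("did:example:alice", {"degree": "BSc Computer Science"})
assert verify_credential(diploma)

# Any tampering with the claim invalidates the signature.
tampered = dict(diploma, claim={"degree": "PhD"})
assert not verify_credential(tampered)
```

The key property shown is that verification needs only the credential and the issuer's key material, which is what removes intermediaries from the verification step.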

Try our Education Demo to see how this technology works in the education sector: click this link.

Exemplary Implementation: A Case Study

A partnership with a leading university showcased the benefits of Extrimian’s DID technology by simplifying credential issuance and verification processes, thereby maintaining accurate, tamper-proof academic records. For more on this, visit our Case Studies page.

Future Prospects in Educational Technology

The potential for DID in education extends to streamlining administrative processes, enhancing personalized learning experiences, and integrating smart contracts for better governance. Extrimian is at the forefront, developing tools that integrate these technologies into educational frameworks.

For broader insights into decentralized digital identities in education and other sectors, the Decentralized Identity Foundation provides extensive resources and research.

Source: https://www.linkedin.com/pulse/self-sovereign-identity-distributed-ledger-blockchain-eray-altili/

Conclusion: Setting the Standard with Extrimian

The integration of QuarkID into the Buenos Aires miBA platform exemplifies a strategic enhancement of the city’s digital infrastructure, setting a global benchmark for digital governance and identity management.

For a detailed understanding of decentralized digital identity movements and how Extrimian’s solutions are pivotal, visit our Use Cases page. Find more insights and potential collaborations on the Extrimian website.

The post Decentralized Identity in Education first appeared on Extrimian.


FindBiometrics

AI Update: It’s So Important to Hire the Right People

Welcome to the newest edition of ID Tech’s AI update. Here’s the latest big news on the shifting landscape of AI and identity technology: SoftBank is planning to take a […]

ID Tech Digest – November 29, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: European Commission Sets Standards for Cross-Border […]

iDenfy Opens Delaware Office to Strengthen US Presence

iDenfy, a Lithuania-based provider of compliance-focused technology solutions, has opened a new office in Delaware as part of its efforts to expand its operations in the United States. The company, […]

Revolut to Launch Face-scanning ATMs in 2025

Revolut, a London-based fintech unicorn, has emerged as a significant player in the digital banking landscape since its inception nearly a decade ago. Founded by Nik Storonsky and Vlad Yatsenko, […]

New Zealand Consumers Show Support for Facial Recognition in Retail

New research commissioned by Foodstuffs North Island (FSNI) reveals a positive attitude among New Zealand consumers toward the use of facial recognition (FR) in retail settings, despite potential privacy concerns. […]

Istanbul to Require FRT for Liquor, Cigarette Retailers

The Istanbul government has issued a new mandate requiring liquor stores to install security cameras by January 1, 2025, marking a significant development in the city’s approach to regulating alcohol […]

SC Media - Identity and Access

High severity RCE flaws among several newly addressed IBM bugs

Fixes have been released by IBM to address numerous product vulnerabilities, the most serious of which are a pair of high-severity remote code execution bugs in its Data Visualization Manager and Security SOAR offerings, reports SecurityWeek.



Widespread WordPress compromise possible with critical plugin flaws

Nearly 50% of over 200,000 WordPress sites with the Spam protection, Anti-Spam, FireWall by CleanTalk plugin were discovered to remain impacted by a pair of critical authorization bypass vulnerabilities, tracked as CVE-2024-10542 and CVE-2024-10781, which could be leveraged to facilitate arbitrary plugin activation and remote code execution attacks, SecurityWeek reports.



Cybercriminals target Black Friday shoppers with AI-made fake online stores

SiliconAngle reports that a Netcraft study has revealed an increase in the use of artificial intelligence large language models in crafting fake online stores and deceptive content during the Black Friday shopping period.



Critical Array Networks flaw added to CISA vulnerabilities catalog

Active intrusions involving a critical web security flaw impacting Array Networks AG and vxAG secure access gateways have resulted in the bug's inclusion into the Cybersecurity and Infrastructure Security Agency's Known Exploited Vulnerabilities catalog, with federal agencies recommended to remediate the issue by Dec. 16, according to The Hacker News.



Major law enforcement operation clamps down on African cybercrime

More than 134,000 cybercrime networks and infrastructures across Africa have been taken down by Interpol and Afripol as part of the massive Operation Serengeti effort, which also resulted in the arrest of 1,006 suspected cybercriminals believed to have caused global financial losses exceeding $190 million, reports The Record, a news site by cybersecurity firm Recorded Future.



FindBiometrics

CJEU Rules Against Bulgaria’s Biometric Data Collection Practices

The Court of Justice of the European Union (CJEU) has delivered a pivotal judgment against Bulgaria’s biometric data collection practices, emphasizing the necessity for authorities to justify such data collection […]

Thales Group

Accelerating Talios with onboard AI


Boosted by artificial intelligence, the Rafale's Talios laser designation pod detects objects of interest faster than ever before. By automating airborne imagery analysis, it provides invaluable assistance to pilots without diminishing their vital role in the decision-making process. This on-board, real-time, trusted AI illustrates the strategic vision clearly set out by Patrice Caine at the Capital Markets Day.

This latest innovation is one of the most important developments so far by cortAIx, the artificial intelligence accelerator set up by Thales to leverage the Group's extensive AI expertise and build ground-breaking solutions for the armed forces, aircraft manufacturers and other operators of critical systems. Since it entered service at the end of 2018, the Talios reconnaissance and targeting pod has steadily added new functions to enhance operational value. And starting with the Rafale standard F4.3, its deep learning algorithms will be capable of searching for objects of interest in a given zone 100 times faster.

 

Onboard image analysis in real time

In a nutshell, the AI is installed in the pod that scans the zone, automatically analysing the images captured and telling the pilot what it's detected. By pre-selecting objects of interest, it reduces the cognitive load on the pilot, but the decision to engage a target remains the pilot's responsibility at all times. Importantly, because the AI is installed in the pod itself – despite physical challenges related to temperature, vibration and energy consumption – it provides information in real time as the mission unfolds. This overcomes the need for a datalink to send imagery to a ground station, because everything happens on board the aircraft. In addition, this AI is capable of spotting small objects in the images, enabling the pilot to remain at a safe stand-off distance from any potential threats. All of which speeds up the tempo of operations because the pilot is kept informed about the tactical situation in real time.

 

Co-engineered with operational personnel

The AI developed for the Talios pod draws on several years of R&D and was trained on vast numbers of examples sourced from sovereign imagery databases compiled by Thales during test flights or provided by the French armed forces. In addition, thanks to an in-house innovation lab called Image'Inn, operational personnel were placed in realistic situations to test different scenarios and fine-tune the user interface. Thanks to this co-engineering approach with end users, the new function has been developed well ahead of operational deployment.

 

Sights set on 2026

And operational deployment could soon be a reality. Thales first presented the potential of the AI to the French armed forces in 2018, and a contract was awarded in December 2023. The new AI function is expected to enter service in 2026 with the arrival of the Rafale standard F4.3, and will be the first function on board the Rafale to make such intensive use of deep learning technologies.

 

Towards collaborative combat

The longer-term significance of the AI installed in the pod will truly be felt in the era of collaborative combat, which will rely on data exchanges between the different assets deployed in the theatre. Given the huge volumes of data generated by all the sensors, the AI will play an essential role by only extracting and transmitting relevant information, overcoming the danger of saturating communication systems.

 

 Michel BLANQUART, Director of the Emports Optronics Department, presented the functionalities of the TALIOS pod to investors and the media.

29 Nov 2024 | Defence and Security | Digital Identity and Security

Dock

Data Reconciliation: An Introductory Guide


In a data-driven world, maintaining accurate and consistent information is essential for business success. Data reconciliation plays a crucial role in ensuring that data from various sources aligns and matches to provide a reliable foundation for decision-making, reporting, and operational efficiency. 

This blog will delve into what data reconciliation is, why it is vital for businesses, and best practices for implementing it effectively.
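A minimal sketch of the core idea, assuming each source exposes records as dictionaries sharing an `id` key (the function and field names are illustrative, not any specific product's API): reconciliation means finding records missing from either side plus records whose fields disagree.

```python
def reconcile(source_a, source_b, key="id"):
    """Compare two record sets and report records missing from either
    side, plus records present in both whose fields disagree."""
    a = {rec[key]: rec for rec in source_a}
    b = {rec[key]: rec for rec in source_b}
    return {
        "missing_in_b": sorted(a.keys() - b.keys()),
        "missing_in_a": sorted(b.keys() - a.keys()),
        "mismatched": sorted(k for k in a.keys() & b.keys() if a[k] != b[k]),
    }

# Example: a billing system vs. the general ledger.
billing = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
ledger = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}, {"id": 3, "amount": 75}]

report = reconcile(billing, ledger)
assert report["mismatched"] == [2]     # amounts disagree
assert report["missing_in_a"] == [3]   # only the ledger has id 3
```

In practice each mismatch would feed an investigation queue; the point is that reconciliation is a diff over keyed records, not a byte-level comparison.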

Full Article:


Customer Data Matching: What is it and why is it important?


In today’s data-centric world, businesses need to maintain accurate and consistent records to thrive. This is where customer data matching becomes essential. Customer data matching is the process of identifying and linking records that represent the same customer across multiple data sources. This ensures that businesses have a single, unified view of their customers, which is crucial for making informed decisions, providing personalized experiences, and maintaining operational efficiency.

Customer data matching goes beyond simply managing data; it is about creating reliable, connected information that helps businesses operate seamlessly. When executed effectively, data matching reduces redundancy, enhances data accuracy, and improves the overall customer experience.
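One common first step in data matching, sketched below under the assumption that records carry `email` and `phone` fields, is to normalize identifiers and group records that share any normalized key. The field names and normalization rules are illustrative, not a description of any vendor's pipeline.

```python
import re

def normalize_email(email: str) -> str:
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    # Strip formatting; keep the last 10 digits (simplistic, illustrative).
    return re.sub(r"\D", "", phone)[-10:]

def match_customers(records):
    """Group record indices that share a normalized email or phone."""
    groups = []          # list of lists of record indices
    key_to_group = {}    # normalized key -> group index
    for i, rec in enumerate(records):
        keys = []
        if rec.get("email"):
            keys.append(("email", normalize_email(rec["email"])))
        if rec.get("phone"):
            keys.append(("phone", normalize_phone(rec["phone"])))
        gid = next((key_to_group[k] for k in keys if k in key_to_group), None)
        if gid is None:
            gid = len(groups)
            groups.append([])
        groups[gid].append(i)
        for k in keys:
            key_to_group[k] = gid
    return groups

crm = [
    {"email": "Ana@Example.com", "phone": None},
    {"email": "ana@example.com", "phone": "+1 (555) 010-2345"},
    {"email": "bob@example.com", "phone": "555-010-9999"},
]
assert match_customers(crm) == [[0, 1], [2]]  # first two records are the same customer
```

Production systems add transitive merging, survivorship rules for conflicting fields, and many more identifiers, but the normalize-then-group shape stays the same.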

Full Article: www.dock.io/post/customer-data-matching


Entity Resolution: What is it and why is it important?


In a world increasingly driven by data, ensuring that information across systems is accurate and connected is essential for business success. This is where entity resolution comes into play. Entity resolution is the process of identifying and linking data records that refer to the same real-world entity, such as a customer or organization, even when those records contain variations or errors. 

Whether you’re in finance, healthcare, or retail, entity resolution helps businesses consolidate their data, reduce duplications, and achieve a single, unified view of each customer, product, or transaction.

Entity resolution is more than just a backend process—it’s a foundational part of managing customer relationships, improving operational efficiency, and reducing risk. Without a clear view of their data, businesses face challenges like data silos, high operational costs, and inconsistent customer experiences.
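To make the "variations or errors" point concrete, here is a toy version using Python's standard-library `difflib` for string similarity and a greedy clustering pass. The `name` field and the 0.85 threshold are assumptions for illustration, not a production entity-resolution pipeline.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve_entities(records, threshold=0.85):
    """Greedy clustering: each record joins the first cluster whose
    representative name is similar enough, else starts a new cluster."""
    clusters = []  # list of (representative_name, members)
    for rec in records:
        for rep, members in clusters:
            if similarity(rec["name"], rep) >= threshold:
                members.append(rec)
                break
        else:
            clusters.append((rec["name"], [rec]))
    return [members for _, members in clusters]

people = [
    {"name": "Jonathan Smith"},
    {"name": "Jonathon Smith"},  # typo variant of the same person
    {"name": "Maria Garcia"},
]
assert len(resolve_entities(people)) == 2  # the two Smith records merge
```

Real systems block candidate pairs first (to avoid comparing everything to everything) and combine several fields into a match score, but the cluster-by-similarity structure is the same.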

Full article: https://www.dock.io/post/entity-resolution


UNISOT

Redefining Sustainability: The End of Greenwashing


In their insightful articles (here and here), Professors Arne Nygaard and Ragnhild Silkoset highlight the pervasive issue of greenwashing, where companies falsely present products as environmentally friendly, thereby eroding consumer trust and undermining genuine sustainability efforts.  

They eloquently argue that blockchain technology is the key to tackling greenwashing by ensuring that sustainability claims are traceable, transparent and verifiable. At UNISOT, we see blockchain not just as a technical solution but as a foundational element in redefining trust in global supply chains. Our Digital Product Passports (DPP), powered by Enterprise Blockchain technology, provide an immutable and transparent record of a product’s entire lifecycle. This aligns with Nygaard and Silkoset’s emphasis on the need for reliable, traceable, and tamper-proof product information to mitigate greenwashing.

UNISOT’s Solutions in Action

Full Transparency Across the Supply Chain

Nygaard and Silkoset stress the importance of transparency, emphasizing that consumers and regulators demand trustworthy documentation of environmental claims. UNISOT’s blockchain-based platform ensures that data entered at every step of the supply chain -whether it’s sourcing raw materials, manufacturing or distribution – is immutably stored and easily accessible. 

Example:
A clothing manufacturer can prove its fabrics are made from 100% recycled materials, with verifiable data on sourcing, energy use and CO₂ emissions. This aligns with the authors’ call for “proof, not promises.” 

 

Trust Through Decentralized Verification

The authors criticize the traditional reliance on centralized certification bodies, which can be susceptible to errors, biases or even corruption. Blockchain decentralizes this process, enabling all participants in the supply chain to input, audit and access data independently. 

UNISOT’s Impact:
Smart Digital Twins provide a decentralized, interactive record of each product, ensuring that sustainability claims are verified by multiple stakeholders – not just the company making the claim. 

Consumer Empowerment Through Digital Product Passports

Nygaard and Silkoset highlight how blockchain can empower consumers by giving them easy access to verified product data. UNISOT’s DPPs make this a reality. By scanning a QR code or NFC tag, consumers can instantly access a product’s history, including: 

Carbon footprint
Ethical sourcing certifications
Compliance with environmental standards

Example:
A customer buying sustainable seafood can verify that the fish was ethically farmed, transported with minimal emissions, and complies with regulatory standards like the EU Digital Product Passport mandate. 

Combating Greenwashing with Immutable Data

A key point raised by Nygaard and Silkoset is the role of blockchain in preventing greenwashing by ensuring data integrity. Companies can no longer alter or selectively report data to present a false image of sustainability. 

UNISOT’s Solution:
Our Enterprise Blockchain backbone ensures that every claim – whether about emissions, sourcing or recycling – is recorded in a tamper-proof manner. Any discrepancy between what is claimed and what the data shows is immediately apparent. 
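The tamper-evidence property can be illustrated with a simple hash chain, where each entry's hash also covers the previous entry's hash, so editing any earlier claim breaks every later link. This is a pedagogical sketch, not UNISOT's actual blockchain implementation.

```python
import hashlib
import json

def append_entry(chain, claim: dict):
    """Append a claim whose hash covers the claim plus the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"claim": claim, "prev": prev_hash}, sort_keys=True)
    chain.append({"claim": claim, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain) -> bool:
    """Recompute every link; any retroactive edit makes this fail."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"claim": entry["claim"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, {"step": "harvest", "co2_kg": 12})
append_entry(chain, {"step": "transport", "co2_kg": 30})
assert verify_chain(chain)

chain[0]["claim"]["co2_kg"] = 1  # retroactively "greener" -- detected
assert not verify_chain(chain)
```

A public blockchain strengthens this further by replicating the chain across independent parties, so no single company can quietly rewrite its own history.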

Real-Time Monitoring and Reporting

Nygaard and Silkoset emphasize the importance of real-time tracking to provide accurate and up-to-date information. UNISOT’s integration with IoT devices ensures that real-time data on energy use, emissions and material flows is continuously recorded and accessible. 

“Blockchain technology offers a more robust solution against the risk of greenwashing than traditional trademarks and certification systems.” – Professors Arne Nygaard and Ragnhild Silkoset

Why This Matters 

Nygaard and Silkoset argue that trust is the cornerstone of a sustainable economy and that companies who invest in transparency and verifiability stand to gain a competitive advantage. With upcoming regulatory changes, such as the EU’s Digital Product Passport and increasing consumer demands for transparency, companies that fail to act risk not only fines but also irreparable damage to their reputation. 

 

UNISOT: Turning Vision Into Reality 

By adopting UNISOT’s solutions, companies can move beyond vague sustainability claims and embrace a future where every claim is backed by immutable data, every product has a story, and every consumer has the power to make informed decisions. 

The battle against greenwashing isn’t just about avoiding bad press; it’s about building a better, more sustainable world. As Professors Nygaard and Silkoset highlight, blockchain technology is the solution to unethical practices, and UNISOT is here to lead the way. 

Reach out to us to explore how UNISOT’s solutions can transform your business into a beacon of trust and transparency. 

Read more: Addressing Greenwashing: Building Trust Through Transparency with UNISOT Solutions

The post Redefining Sustainability: The End of Greenwashing appeared first on UNISOT.


BlueSky

The Engagement Is Better on Bluesky

Bluesky is the lobby to the open web. Find and build your community here.

We could go on about how we welcome publishers, we don't demote links, we encourage independent developers to build apps and extensions on top of Bluesky's network.... but instead, we'll show you:

The Boston Globe

Traffic from Bluesky to @bostonglobe.com is already 3x that of Threads, and we are seeing 4.5x the conversions to paying digital subscribers.

— Matt Karolian (@mkarolian.bsky.social) November 26, 2024 at 10:19 AM
The Guardian

By which I mean, I'm pretty sure traffic from @bsky.app to @theguardian.com is *significantly* higher than the very obvious 2x that of Threads

This post brought to you by a reply to @mkarolian.bsky.social on Threads, where it has had just 105 engagements, as opposed to the 18k+ here


— Dave Earley (@earleyedition.bsky.social) November 26, 2024 at 10:30 PM

The New York Times

hard to exaggerate how nuts the engagement is on Bluesky compared to 𝕏. a vastly smaller user base (at least officially), but just look at these stats for one of the biggest newspapers on Earth. Musk has absolutely trashed the platform. folks, you are not locked in on 𝕏. not even a little.


— Kevin Rothrock (@kevinrothrock.me) November 23, 2024 at 1:21 AM
Open-source Web Dev

We have 6% of the followers here compared to the 100k in X. The vite 6.0 announcement in bluesky already got half the reposts and a third of the likes. And most of the comments and quotes from OSS maintainers happened here. I don't know about other communities, but OSS web dev is a bluesky game now.


— patak (@patak.dev) November 27, 2024 at 8:01 AM
Democracy Docket

Traffic from Bluesky to @democracydocket.com is surging while X is falling and Threads remains largely irrelevant. This is powering rapid growth of both free subscribers and paid members.

— Marc Elias (@marcelias.bsky.social) November 27, 2024 at 5:31 AM

Join us: bsky.app/download. Publishers, you can find our press FAQ here.

Thursday, 28. November 2024

FindBiometrics

European Commission Sets Standards for Cross-Border Digital Identity Wallets

The European Commission has taken a significant step forward by adopting technical standards for cross-border European Digital Identity Wallets. The move aims to establish a unified digital identity system across […]

ID Tech Digest – November 28, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Australia Passes Social Media Ban for […]

Australia Passes Social Media Ban for Kids

Australia has now officially passed legislation that bans children under 16 from accessing major social media platforms, marking a global first in social media regulation. The new law, passed by […]

auth0

Empower Your Enterprise Customers to Set up Their Own SSO Implementations

Self-Service Single Sign-On (SSO) reaches General Availability (GA) status

KuppingerCole

Security in the Era of Rapid Digitalization in Operational Technology Environments


by John Tolbert

The Rapid Digitalization of OT, ICS, and IoT: Opportunities and Security Risks

In many enterprises, Industrial Control Systems (ICS) and Operational Technology (OT) systems were kept isolated from IT environments, both logically and physically. ICS is generally considered a subset of OT. Internet of Things (IoT) devices, however, were designed to be networked, enabling real-time or latent data transmissions to applications to generate insights and to provide remote control capabilities. The connectivity of OT, ICS, and IoT systems to the cloud or corporate networks has increased across many industries, from manufacturing and pharmaceuticals to oil and gas and aerospace.

While this sea change in network connectivity and access policies offers benefits such as predictive maintenance, asset optimization, and enhanced productivity, it also dramatically expands the attack surface. Cyber attacks targeting OT, ICS, and IoT systems are no longer hypothetical. Attacks can have direct and severe consequences, from halting production lines to causing environmental damage or even endangering lives. Cybersecurity in OT and ICS needs to address unique constraints, including:

Reliability requirements: OT systems often operate on very strict schedules and have uptime requirements that cannot be violated without severe safety and financial consequences. Security updates or patches need to be carefully planned to avoid disrupting critical processes. Allowing real-time patch updates directly from vendors is most often prohibited.

Legacy systems: Many OT systems run on outdated hardware and software that lack the capacity or compatibility for modern security solutions. This means performing timely security updates may be practically impossible, and other measures may be needed to protect and contain such legacy systems.

Physical access: In industrial environments, devices and sensors may be spread across large areas or even in distant, isolated locations, which makes physical security, network segmentation, and secure remote access imperative.

To address these limitations, security strategies must be adapted to meet the unique requirements of OT/ICS/IoT, with specific attention paid to the differences from standard IT infrastructure.

Key Differences in Securing OT, ICS, and IoT Environments

Despite the overlap between IT and OT, the two are distinct in their technological requirements, operational demands, and security challenges.

IT Security: IT security solutions rely on a variety of well-established tool types like firewalls, EPDR, SIEM, SOAR, and IAM to safeguard data integrity and confidentiality. IT environments tend to prioritize agility, which allows more frequent software updates and rapid deployment of new security measures without significant downtime. IT tools use common protocols like HTTPS, SMTP, SMB, and LDAP, for which many security solutions already exist.

IoT Security: IoT devices can be found in a wide range of environments, from smart homes to warehouses to industrial facilities. They are often resource-constrained, with limited processing power and memory, which can restrict the type and complexity of security protocols they can support. Many IoT devices were simply not designed with security in mind. IoT devices use protocols such as CoAP, MQTT, and XMPP, which are less common in traditional IT, so IT security tools are less likely to support them out of the box. Vendor-provided or third-party IoT security solutions generally focus on ensuring data integrity, communication confidentiality over IoT protocols, protecting against device spoofing, and managing device identities and access.

OT/ICS Security: OT/ICS systems are generally custom-engineered for specific applications, often running on special or proprietary protocols like Modbus, DNP3, OPC-UA, and S7. Security in OT/ICS environments focuses on maintaining uptime, integrity, and safe operations, with stringent requirements to avoid disruptions. Certain OT protocols lack built-in support for encryption or authentication, requiring additional protective measures. In most ICS and Critical Infrastructure Systems (CIS), the safety of workers and the surroundings takes precedence over even computing security.

Security strategies for IT and OT therefore need to account for these differences. IT security focuses more on malware prevention via endpoint protection detection and response (EPDR), identity and access management (IAM), and network segmentation, while OT security demands robust intrusion detection, continuous monitoring, and a deep understanding of OT/ICS/IoT protocols to detect anomalous or malicious behavior.
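The baseline-and-deviation approach behind such anomaly detection can be sketched very simply (the traffic counts and threshold here are hypothetical; production OT monitoring tools model far richer protocol-level features):

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a simple per-metric baseline (mean, stddev) from normal traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > z_threshold * sigma

# Hypothetical: Modbus write commands observed per minute during normal operation
normal_write_rates = [4, 5, 6, 5, 4, 5, 6, 5, 4, 5]
baseline = fit_baseline(normal_write_rates)

assert not is_anomalous(6, baseline)   # within normal variation
assert is_anomalous(40, baseline)      # burst of writes -> worth an alert
```

Real OT/ICS detection systems extend this idea to per-device, per-protocol, and per-command baselines so that deviations can be attributed to a specific asset.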

Navigating the Divide Between OT Engineering and IT Software Engineering

Another difficulty in securing OT, ICS, and IoT environments is the difference in worldviews between OT engineers and IT software engineers. OT engineers prioritize reliability and safety, because failures in OT environments can have immediate and severe consequences. Conversely, software developers tend to prioritize rapid innovation and adding functionality, which can be a higher priority for their IT customers.

This culture clash can lead to friction in implementing security measures for IT and OT systems. Some challenges include:

Risk tolerance: OT engineers have a low tolerance for change and untested solutions, while IT software developers are accustomed to coding and testing new technologies frequently to keep up with IT customers’ demands.

Update and patch schedules: Software vendors may push regular software updates to deploy new features and security patches, whereas OT engineers have to schedule patches and updates comparatively infrequently, perhaps just once or twice per year, and see these as potential disruptions to uptime or performance.

Organizations can take two different approaches here. The first is to leverage IT security systems where they make sense: deploying EPDR agents where permitted by OT vendors, using OT/ICS/IoT-aware Network Detection and Response (NDR) solutions to find and stop malicious actors, using SIEM and SOAR systems for collection and analysis of all telemetry and additional response actions. The other approach is to implement dedicated OT/ICS security solutions. These OT/ICS security solutions can cover additional functions such as asset discovery and classification, scanning of USB devices (used for updating firmware) for malware, as well as monitoring and anomaly detection, and are designed to work in these environments with HMIs, PLCs, SCADA, and IoT devices.

Keeping Pace with Digitalization: Adaptive Security Strategies

As OT digitalization accelerates, security measures need to be agile, capable of adapting to emerging threats, and proactive in addressing potential vulnerabilities. Key strategies include:

Zero Trust Architecture: Zero Trust models work on the principle of “never trust, always verify,” ensuring that every request for access is authenticated and authorized. This approach reduces lateral movement in networks, limiting the scope of damage if a device is compromised. Zero Trust Network Access is particularly important for securing remote access by vendors and contractors into OT/ICS networks.

Network Segmentation: Network segmentation divides the network into isolated segments or zones. In OT, this means separating different parts of the production floor or critical control systems from non-critical zones, thus limiting the exposure of sensitive systems to potential threats. OT-centric security solutions are often designed to enforce separation in accordance with the Purdue Enterprise Reference Architecture.

Behavioral Analytics and Anomaly Detection: IT and OT systems can benefit from anomaly detection tools that learn regular patterns of behavior and trigger alerts when unusual activity occurs. Since some OT components lack basic security features like authentication, monitoring for deviations in traffic and user behavior can help detect and contain potential threats before they escalate.

Conclusion

Securing OT/ICS/IoT environments in the current era of rapid digitalization is a multifaceted challenge that requires a tailored approach for each organization. Join us at cyberevolution in Frankfurt, Germany on 3-5 December to hear more about OT and ICS security.


Ocean Protocol

2024 Mexican Grand Prix: Formula 1 Prediction Challenge Results

Introduction

The Formula 1 Prediction Challenge: 2024 Mexican Grand Prix brought together data scientists to tackle one of the most dynamic aspects of racing — pit stop strategies. Participants used historical data from past Mexican Grand Prix events and insights from the 2024 F1 season to create machine-learning models capable of predicting key race elements. With every second on the track critical, the challenge showcased how data can shape decisions that define race outcomes.

The challenge focused on predicting four essential components of pit stop strategies: the number of stints, tire compound choices, laps per stint, and average lap times. Using innovative approaches and advanced algorithms, participants modeled scenarios accounting for starting grid positions, driver performance, and unpredictable race conditions like weather changes or mid-race interruptions. The goal was to provide actionable insights for teams navigating the complexities of modern Formula 1 strategy.

The challenge demonstrated the intersection of sports and data science by combining real-world datasets with predictive modeling. It highlighted the importance of adaptability and precision as models needed to handle variations in track conditions, driver strategies, and car performance. This competition emphasized leveraging analytics in one of the world’s fastest and most data-intensive sports.

2024 Mexico GP Prediction Challenge — Top 10 Podium

1st Place: Aleksandr Lazutin [Poland]

Aleks used three Random Forest Regression models to predict stints, tire life, and average lap time, and a Random Forest Classifier to predict tire compounds. The model incorporated predictions for individual drivers and the entire grid, offering flexibility in application for race strategy. By organizing predictions into a modular structure, Aleksandr ensured each component could function independently while supporting the broader model.
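A minimal sketch of this one-model-per-target structure, using scikit-learn with synthetic data (the features and targets below are invented for illustration; this is not Aleksandr's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical features: [grid_position, track_temp, season_form_score]
X = rng.random((200, 3))
y_stints = rng.integers(1, 4, 200)     # number of stints (1-3)
y_compound = rng.integers(0, 3, 200)   # 0=soft, 1=medium, 2=hard

# One regressor per numeric target and a classifier for the categorical
# target, mirroring the modular "one model per component" design.
stint_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_stints)
compound_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_compound)

race_conditions = rng.random((1, 3))
predicted_stints = stint_model.predict(race_conditions)[0]
predicted_compound = compound_model.predict(race_conditions)[0]
```

Keeping each target in its own model lets a component (say, the compound classifier) be retrained or swapped independently of the others.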

He accounted for driver and team variability by including performance metrics, historical data, and team-specific strategies. The model incorporated external factors like weather and mid-race incidents, ensuring it adapted to dynamic race conditions. Outputs provided detailed stint breakdowns and timelines to support decision-making.

Aleks ensured the model could be implemented without complications by delivering structured outputs and comprehensive documentation. This design enabled the evaluation team to apply the model efficiently, ensuring its top ranking in the challenge.

2nd Place: Yuichiro “Firepig” [Japan]

Firepig created a three-step model that used decision trees, linear regression, and random forests to predict tire strategies, laps per stint, and average lap times. The model started with tire compound predictions, followed by stint and lap estimates, and ended with lap time calculations. Firepig’s approach allowed it to adapt to changing race conditions, such as weather or race interruptions.

The model integrated data like grid position, tire compounds, and driver performance to align predictions with real-world racing strategies. Firepig included options for mid-race updates by allowing inputs like current laps, stint numbers, and weather conditions. This structure ensured the model could adjust to unpredictable scenarios during the race.

Firepig refined predictions using detailed feature engineering and cross-validation. The model secured second place in the competition by designing a tool that handled race variability and provided practical outputs.

3rd Place: Yunus Gümüşsoy [Türkiye]

Yunus’ model combined XGBoost, LightGBM, and CatBoost to predict stints, tire compounds, laps per stint, and average lap times. Yunus focused on building a robust data pipeline, merging historical and current-season data to create a comprehensive dataset. The model incorporated track-specific factors like altitude and straights to align predictions with the unique demands of the Mexican Grand Prix.

He integrated weather data, driver inputs, and car performance metrics to handle dynamic race scenarios. This adaptability allowed the model to remain effective under varying conditions. The implementation included clear input guidelines and outputs designed for practical use in race-day strategy planning.

Yunus secured third place by delivering a flexible, well-documented solution that bridged data science and Formula 1 strategy. His focus on track-specific insights and comprehensive data preparation set the model apart.

2024 Championship

Our challenges offer prize pools from $10,000 to $20,000, distributed among the top 10 participants. Our points system for the championship allocates between 100 and 200 points to the top 10 finishers in each challenge, with each point valued at $100. Participants accumulate these points toward the 2024 Championship. Last year, the top 10 champions received an additional $10 for each point they had earned.

Current 2024 Championship Standings

Additionally, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists maintain their intellectual property rights while we provide support in monetizing their innovations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

2024 Mexican Grand Prix: Formula 1 Prediction Challenge Results was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


DF117 Completes and DF118 Launches

Predictoor DF117 rewards available. DF118 runs Nov 28 — Dec 5th, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 117 (DF117) has completed.

DF118 is live today, Nov 28. It concludes on December 5th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF118 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.

To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF118

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to distribute these rewards evenly. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF117 Completes and DF118 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 27. November 2024

KuppingerCole

Don’t Let the Endpoints Become the Entry Door for Attackers


Most cyberattacks are identity-based and come in via endpoints. Identity Security on one hand and Endpoint Protection on the other are thus cornerstones of every successful cybersecurity strategy. EPDR (Endpoint Protection, Detection & Response) has evolved as a unified approach that goes beyond traditional anti-malware and EPP (Endpoint Protection Platform) and adds detective and responsive capabilities. It also closely integrates with further detective and responsive technologies such as XDR (eXtended Detection & Response).

In this webinar, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will look at the status and future of EPDR, what to consider when defining your own comprehensive and integrated approach to cybersecurity, and what the vendor landscape looks like. He will discuss the state of the market and the approaches to endpoint security, from unified solutions to integrating / orchestrating different best-of-breed solutions from EPP to Email Security and UEM (Unified Endpoint Management), as well as the interplay of EPDR with SIEM, SOAR, and XDR.




Indicio

How decentralized digital identity is creating simpler, more streamlined air travel

The post How decentralized digital identity is creating simpler, more streamlined air travel appeared first on Indicio.
The inexorable growth in passenger numbers has driven the travel industry to an overwhelming conclusion: the only way forward is decentralized identity, digital wallets, and Verifiable Credentials. In the first trial of its kind, with Delta Airlines and the Government of Aruba, SITA and Indicio showed how well they work and how easy they are to implement.

By Trevor Butterworth

With yearly passenger numbers expected to grow from a little over four billion this year to eight billion by 2040 — or sooner — “the travel industry has decided that the future of identity is digital wallets and verifiable credentials,” said Michael Zureik, Head of Digital Travel Strategy and Innovation at SITA, at a recent Indicio Meetup.

To handle this demand and make travel less stressful and more streamlined, SITA — a global supplier of IT to airlines and airports — has worked with Indicio to develop trusted, authenticated, digital travel documents that can be seamlessly authenticated.

First implemented in Aruba, these digital credentials take the data embedded in a passport, combine it with a liveness check, authenticate both, and then return the data to the passenger in the form of a Digital Travel Credential (DTC), a specification for deriving a digital passport from a physical passport’s embedded chip, established by the International Civil Aviation Organization (ICAO), the global body that regulates travel documents.

The Government of Aruba is the first sovereign government to accept a DTC, which means a passenger can present their DTC from home, get preauthorization for travel, and then cross the border simply by looking at a camera.

The focus of the Meetup was on the next steps in this digital journey, which were successfully completed with a recent trial combining an ICAO-compatible DTC with One ID, a digital credential standard created by the International Air Transport Association (IATA) for seamless airport and travel services (but which doesn’t include border crossing).

Working with Delta Airlines and the Government of Aruba, SITA and Indicio built a system to first create and issue the DTC and One ID then combined both credentials to add check-in, bag drop, lounge access, and boarding to booking, travel authorization, security, and immigration.

One of the key goals of the trial, said Zureik, was to show how different credentials can easily work together and complement each other to streamline the traveler experience on an international flight.

The trial showed just how easy it is to enroll in both at the same time with a passport and mobile phone, and then use them for instant, seamless authentication through each step of the passenger journey from home to destination.

What we learned, said Zureik, is that these technologies are ready. IATA’s One ID fits easily into the DTC ecosystem, and both were able to be implemented into airport, airline, and border gate processes quickly — six weeks — and without requiring any party to adopt new hardware or systems.

It was, said Zureik, “paramount” that Verifiable Credentials can be easily integrated and interoperate with existing airport infrastructure.

Mike Ebert, Indicio’s Director of Software Engineering, described how this was made possible by combining two types of Verifiable Credential formats — AnonCreds and SD-JWTs — and two types of communication protocols — DIDComm and OpenID4VC. This meant the credentials were compatible with emerging European eIDAS identity standards while also providing strong privacy preservation.
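The selective-disclosure idea behind SD-JWTs can be illustrated in a few lines. This is a conceptual sketch of the salted-hash mechanism only, not the full signed-token format, and the claim names and values are invented:

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as SD-JWT disclosures require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_disclosure(claim_name, claim_value):
    """Create an SD-JWT-style disclosure and its digest.

    The issuer signs only the digests; the holder later reveals individual
    disclosures selectively, and the verifier re-hashes them to check.
    """
    salt = b64url(secrets.token_bytes(16))
    disclosure = b64url(json.dumps([salt, claim_name, claim_value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode("ascii")).digest())
    return disclosure, digest

def verify_disclosure(disclosure, signed_digests):
    """Recompute the digest and check it appears in the signed token."""
    return b64url(hashlib.sha256(disclosure.encode("ascii")).digest()) in signed_digests

# Issuer: disclosures for passport claims; only the digests go into the signed JWT.
disc_name, dig_name = make_disclosure("family_name", "Doe")
disc_dob, dig_dob = make_disclosure("birth_date", "1990-01-01")
signed_digests = {dig_name, dig_dob}

# Holder reveals only the name; the verifier checks it against the signed digests.
assert verify_disclosure(disc_name, signed_digests)
```

Because the salt makes each digest unlinkable to its claim value, the verifier learns only the claims the holder chooses to reveal, which is what makes the format a good fit for privacy-preserving travel credentials.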

One key aspect of the Indicio-SITA implementation is that it is fully decentralized. Relying parties do not need to subscribe to or access a centralized database to verify traveler information. This unique architecture is not shared by other travel technology providers, but it is critical for the safe and secure management of people’s biometric data.

With a Verifiable Credential, biometric data doesn’t need to be centrally stored so that there’s something to check when a passenger looks into a camera. Instead, the passenger stores their biometric data securely on their phone and they can elect to share it in a way that can be verified without pinging stored data in a central repository.

As Ebert noted, this reduces the risk of a single point of failure (the database of biometric data goes offline), identity theft (if you don’t have to store people’s biometric data, it can’t be stolen), and it also eliminates the possibility of centralized tracking and provides the privacy protection that people and regulators now demand.

Acuity Market Intelligence described Indicio’s implementation of biometrics in Verifiable Credentials as “masterful” in its 2024 Prism Project Report.

To watch a demonstration of the DTC and One ID and learn more about the benefits of managing biometric data with Verifiable Credentials, and the wider use cases for “government-grade” digital identities in tourism, watch this fascinating episode of the Indicio Meetup.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



Thales Group

Thales Alenia Space to lead Carb-Chaser project, the first French constellation to monitor human-induced CO₂ emissions


Cannes, November 27, 2024 – Thales Alenia Space, the joint venture between Thales (67%) and Leonardo (33%), is pleased to unveil the Carb-Chaser project. This innovative program is based on a constellation of new-generation satellites specially designed to detect and measure human-induced carbon dioxide (CO₂) emissions, in particular from industrial sites.

Carb-Chaser’s compact architecture will combine efficiency and cost control to meet the needs of carbon monitoring markets.

The project is funded by the French government as part of the France 2030 stimulus plan and will allow the company to mature the payload, finalize constellation sizing and define the precursor satellite.

Image of the Earth from Meteosat Third Generation first imaging satellite (MTG-I1). Image unveiled by EUMETSAT & ESA in May 2023 © EUMETSAT & © ESA

Optical technology at the cutting edge of innovation

Each Carb-Chaser satellite will carry a hyper-compact multispectral interferometer. These highly innovative instruments are made possible by miniaturizing key technologies used in major programs such as the Meteosat geostationary weather satellites and Copernicus, with its 12 European environmental monitoring missions. They will offer the capability to locate individual CO₂ plumes and attribute their source to a specific industrial facility, even in complex atmospheric conditions (wind, aerosols, water vapor, etc.).

Based on a Thales Alenia Space proprietary patented technology, this multispectral interferometry approach marks a breakthrough in terms of dependability and performance, further enhancing the ability of these satellites to offer reliable operational data with shorter revisit cycles to establish an overview of industrial sites on a global scale.

Certified data for strategic applications

The future Carb-Chaser constellation will provide independent, verifiable and certified CO₂ measurements spanning the entire value chain, including carbon services markets. Thanks to its high-precision geolocation capability, emissions will be accurately attributed to specific industrial facilities. These data will then be verified by in-situ surveys performed directly at the sites concerned.

Measurements will also be certified by the French Space Agency (CNES), the European Space Agency (ESA) and scientific experts specializing in atmospheric studies. This official validation will ensure these data can be included in regulatory frameworks, especially for carbon quota systems and carbon border adjustment mechanisms.

Complementary fit with major European space missions for environmental monitoring

Carb-Chaser will operate in synergy with existing European programs dedicated to measuring carbon emissions, such as MicroCarb and CO2M. While MicroCarb is a scientific mission to assess CO₂ fluxes on a global scale, and CO2M will measure human-induced CO₂ on a regional scale, Carb-Chaser will monitor local-scale emissions. Carb-Chaser data will also be used in conjunction with data from the CO2M program to compile atmospheric inventories and track progress toward climate goals.

These three missions, while distinct, will complement and feed into each other to provide a global, integrated picture of carbon emissions and support international efforts to reduce the impact of human activities on the climate.

 

A French consortium at the heart of industrial innovation

Led by Thales Alenia Space, the Carb-Chaser project brings together a consortium of dynamic French SMEs such as U-Space, WaltR, Everimpact, SPASCIA and QAIrbon, as well as the IRT Saint Exupéry technological research institute. Together, the partners will combine their expertise to accelerate the ecological transition while strengthening Europe’s technological sovereignty. In addition to the scientific advances it will deliver, Carb-Chaser will have a direct economic impact on key regions of the French space industry, including Occitania, Brittany and the PACA region (Provence-Alpes-Côte d’Azur).

As part of the ambitious France 2030 strategy, Carb-Chaser reflects France and Europe’s determination to become world leaders in carbon emissions monitoring. This pioneering program marks a decisive step forward in efforts to combat climate change and opens the way to new markets for space technologies.

About France 2030

Devised in consultation with local and European business and academic partners, France 2030 offers the country exceptional resources to meet the ecological, demographic, economic, industrial and social challenges of today’s changing world. This unprecedented plan for innovation and industry reflects a dual ambition. First, sustainably transform key sectors of our economy — such as energy, automotive, aerospace, digital and space — through innovation and industrial investment. And second, position France not just as a player, but as a leader in the economy of the future.

ABOUT THALES ALENIA SPACE
Drawing on over 40 years of experience and a unique combination of skills, expertise and cultures, Thales Alenia Space delivers cost-effective solutions for telecommunications, navigation, Earth observation, environmental management, exploration, science and orbital infrastructures. Governments and private industry alike count on Thales Alenia Space to design and build satellite-based systems that provide anytime, anywhere connections and positioning, monitor our planet, enhance management of its resources and explore our Solar System and beyond. Thales Alenia Space sees space as a new horizon, helping to build a better, more sustainable life on Earth. A joint venture between Thales (67%) and Leonardo (33%), Thales Alenia Space also teams up with Telespazio to form the parent companies’ Space Alliance, which offers a complete range of services. Thales Alenia Space posted consolidated revenues of approximately €2.2 billion in 2023 and has around 8,600 employees in 8 countries, with 16 sites in Europe.


IIT Madras and Thales unveil top 6 teams developing eco-friendly technology for Carbon Zero Challenge (CZC 4.0)

Thales’s CSR project “Carbon Zero Challenge 4.0” with IIT Madras sees six teams emerge on top. These teams will receive start-up seed funding of up to ₹10 lakh to develop their prototypes further. They were selected from the initial 25 teams who embarked on a rigorous six-month journey to develop sustainable prototypes across various sectors, including energy, materials, agriculture, air, and water.

Indian Institute of Technology Madras (IIT Madras), in association with Thales, has announced the top six teams developing eco-friendly technology from the fourth cohort of the Carbon Zero Challenge (CZC 4.0), a nationwide contest to boost innovation in this sector. As part of Thales’s CSR and solidarity efforts in India, Thales supported this transformative eco-innovation and entrepreneurship challenge.

The top six teams will receive a start-up seed funding of up to ₹10 lakh. One other team has also been recognised with a ‘special mention’ for notable achievements. The CZC challenge aims to accelerate groundbreaking solutions to address climate change and foster sustainability. The third edition was supported by Thales and Aquamap (Centre for Water Management and Policy at IIT Madras) and reached out to over 1,600 students and researchers from 600 universities and 270 start-ups across India. In line with Thales’ Environmental, Social, and Governance (ESG) strategy, Thales supported the programme for a second consecutive year, for its fourth edition, showcasing its commitment to building a safer, greener, and more inclusive world.

The final teams were shortlisted from the initial list of 25 teams announced in April 2024. These teams embarked on a rigorous six-month journey to develop sustainable prototypes across various sectors, including energy, materials, agriculture, air, and water. These teams were mentored on business aspects by Sustainability Mafia, a leading community of climate entrepreneurs out of India. They were invited to IIT Madras to showcase their innovations at the CZC 4.0 Grand Expo, held from 26–28 October 2024. The CZC initiative encourages deep-technology and circular economy solutions to combat pressing environmental issues. To date, the CZC has supported around 100 prototypes.

 

“Thales is proud to have supported IIT Madras’ Carbon Zero Challenge, an initiative that not only stimulates transformative eco-innovation but also resonates closely with our vision of advancing sustainable solutions for the future. Our collaboration on CZC with IIT Madras has been to empower young innovators across India to address critical environmental challenges and pave the way for development of impactful, resource-efficient technologies, in line with our long-term commitment towards nurturing a cleaner, greener world. We congratulate all the participants and look forward to the continued progress of these exceptional teams in shaping a sustainable tomorrow”, said Mr. Ashish Saraf, VP and Country Director for India, Thales.

 

Prof. Indumathi Nambi, Coordinator, Carbon Zero Challenge, IIT Madras, said, “With a focus on fostering eco-startups addressing global challenges like climate change, pollution, and food security, CZC 4.0 attracted over 2,000 participants from 775 universities and 430 startups across India. Thirty startups have emerged from CZC’s previous cohorts, with another 35 advancing toward commercialisation. Participating teams received up to ₹500K in funding and mentorship to develop their prototypes.”
Prof. Indumathi Nambi added, “With the success of CZC 4.0, IIT Madras and its partners continue to push the boundaries of innovation, fostering a generation of startups ready to address some of the world’s most pressing environmental issues through sustainable technology.”

The top six teams and the team with special mention recognised for their contributions to sustainable technology are:

Gudlyf Mobility Pvt Ltd – H2ARWASTE: Developing hydrogen storage cylinders using agricultural waste.

EESAN – CBG for Sustainability: Enabling cleaner bio-methane for homes and small businesses.

Electropulse Innovations – Wastewater Treatment: Using high-voltage pulse generators for efficient wastewater management.

Thaal Chemy Innovations Pvt Ltd – Sustainable Packaging: Producing nano-cellulose from agricultural residues.

ReWinT – End-of-Life Turbine Blades: Transforming wind turbine blades using eco-friendly chemical and thermal processes.

Chrissron Biomass Solutions – Plant-Based Resin: Manufacturing sustainable resin from plant waste.

Special Mention
Team YoTuh Energy was awarded a special mention for their groundbreaking electrified refrigeration technology for cold logistics vehicles, highlighting their rapid traction in investment and commercialisation.

The Photographs of the Top Six Teams with their Prototypes can be viewed here.

The Carbon Zero Challenge represents an unparalleled opportunity for the brightest minds to converge, channel their creativity, and contribute meaningfully towards shaping a sustainable future for our planet. Together, we can ignite the spirit of entrepreneurship, protect our environment, and move closer to achieving a carbon-neutral world.

 

About Thales
Thales (Euronext Paris: HO) is a global leader in advanced technologies specialising in three business domains: Defence & Security, Aeronautics & Space and Cybersecurity & Digital identity.
It develops products and solutions that help make the world safer, greener and more inclusive.
The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.
Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

About Thales in India
Present in India since 1953, Thales is headquartered in Noida and has other operational offices and sites spread across Delhi, Bengaluru and Mumbai, among others. Over 2200 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in Defence, Aerospace and Cybersecurity & Digital Identity markets. Thales has two engineering competence centres in India - one in Noida focused on Cybersecurity & Digital Identity business, while the one in Bengaluru focuses on hardware, software and systems engineering capabilities for both the civil and defence sectors, serving global needs.

About IIT Madras
Indian Institute of Technology Madras (IITM) was established in 1959 by the Government of India as an ‘Institute of National Importance.’ The activities of the Institute in various fields of Science and Technology are carried out in 18 academic departments and several advanced interdisciplinary research academic centres. The Institute offers undergraduate and postgraduate programmes leading to B.Tech., M.Sc., M.B.A., M.Tech., M.S., and Ph.D., degrees in a variety of specialisations. IITM is a residential institute with more than 600 faculty and 9,500 students. Students from 18 countries are enrolled here. IITM fosters an active entrepreneurial culture with strong curricular support and through the IITM Incubation Cell.
Recognized as an Institution of Eminence (IoE) in 2019, IITM has been ranked No.1 in the ‘Overall’ Category for the sixth consecutive year in India Ranking 2024 released by National Institutional Ranking Framework, Ministry of Education, Govt. of India. The Institute has also been ranked No.1 in the ‘Engineering Institutions’ category in the same Rankings for nine consecutive years – from 2016 to 2024. It was also adjudged as the ‘Top innovative Institution’ in the country in Atal Ranking of Institutions on Innovation Achievements (ARIIA) in 2019, 2020 and 2021. ARIIA Ranking was launched by the Innovation Cell of Ministry of Education.
Follow IIT Madras on FACEBOOK / TWITTER / LINKEDIN / INSTAGRAM / YOUTUBE
Contacts: Pawandeep KAUR, Thales, Communications in India, 27 Nov 2024

Tuesday, 26. November 2024

KuppingerCole

Building Secure APIs with Standards like FAPI, OAuth2, and OpenID Connect


This Videocast episode explores the complexities and advancements in digital identity standards, focusing on FAPI, OAuth, and OpenID Connect. Martin Kuppinger and Joseph Heenan, CTO of Authlete, discuss the origins and purpose of FAPI, its adoption across various regions, and its significance in enhancing security and interoperability in financial services. They also highlight the role of Authlete in simplifying the implementation of these standards for developers and the emerging trends in decentralized identity and verifiable credentials. 




2024 PAM Market Insights & Vendor Analysis


Join us for a comprehensive webinar on the 2024 Leadership Compass for Privileged Access Management (PAM), where we’ll unpack the latest insights and vendor evaluations shaping the PAM landscape. Discover which vendors lead the market in innovation, product strength, and scalability, and explore emerging capabilities like Just-in-Time (JIT) access and Cloud Infrastructure Entitlement Management (CIEM). Gain a deeper understanding of how PAM solutions can secure critical assets across multi-cloud and on-premises environments and learn best practices for selecting a solution that aligns with your organization’s security and compliance needs.

Takeaways:

Key leaders in Privileged Access Management and what sets them apart
How innovations like JIT access and CIEM are reshaping PAM
Insights into market trends and emerging PAM solutions
Essential capabilities buyers should consider in PAM tools
Selecting the right PAM solution for scalability and future-proofing


liminal (was OWI)

The Age-Verified Internet Strategies for Safer Online Experiences

The post The Age-Verified Internet Strategies for Safer Online Experiences appeared first on Liminal.co.

Spruce Systems

Key Topics Shaping the Future of Digital Identity

Learn about some of the top topics the identity industry is abuzz with and our takeaways from the most recent Internet Identity Workshop.

Earlier this month, members of the SpruceID team attended the 39th installment of the Internet Identity Workshop, or IIW. The biannual conference is on the cusp of its third decade, and one reason we love it is that it still embodies the collaborative, grassroots spirit of its 2005 founding era. This ethos still guides the open, standards-based world of identity, making IIW an ideal place to check in on important developments and ideas.

Held at the Computer History Museum near Google’s campus in Mountain View, California, each installment of IIW begins with an iconic opening circle to introduce all attendees and solicit session proposals on the fly. This “unconference” structure means you never know quite what you’ll see at IIW until it starts – and input was on hand from every corner, from Microsoft and Google to solo entrepreneurs and the government administrators bringing the next wave of identity to life.

Over 3 days and 177 discussion sessions, the breadth of topics at IIW was huge, but a few key themes came into focus for us: Collaboration and convergence across groups of stakeholders; the new concept of “personhood credentials”; and the cryptographic innovation known as Zero Knowledge Proofs.

Personhood Credentials

SpruceID may be a bit biased, since SpruceID CEO Wayne Chang was part of the team that rolled out the idea in a whitepaper earlier this year, but it was gratifying to see a lot of discussion of “personhood credentials” at IIW. Broadly, a personhood credential is a kind of verifiable digital credential that shows its holder is a natural person, without revealing other personal information. The goal is to help combat disinformation and spam online by making it easy to identify content posted by a real human.

Various breakouts delved into technical details of implementing personhood credentials and potential applications, such as in “know your customer” procedures for financial services. But the discussion also turned to deeper quandaries, including a challenge to the premise of the personhood credential: Why is the burden of proof on humans? One panel on “approved AI agents” explored how digital credentials could be used to identify autonomous agents online, sparking a lively debate about trust and accountability in the digital age.
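To make the idea concrete, here is a minimal sketch of what a personhood credential payload might look like, loosely following the W3C Verifiable Credentials data model. The issuer, DIDs, and field names are hypothetical, and the cryptographic proof section a real credential would carry is omitted for brevity.

```python
# Hypothetical personhood credential: the only claim disclosed is that the
# holder is a natural person, bound to a pseudonymous identifier.
personhood_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "PersonhoodCredential"],
    "issuer": "did:example:issuer-registry",   # hypothetical issuer DID
    "credentialSubject": {
        "id": "did:example:holder-123",        # pseudonymous holder DID
        "isNaturalPerson": True,               # the single disclosed claim
    },
}

def discloses_only_personhood(credential: dict) -> bool:
    """Check that the subject reveals nothing beyond the personhood claim."""
    subject = credential["credentialSubject"]
    return set(subject) <= {"id", "isNaturalPerson"}

print(discloses_only_personhood(personhood_credential))  # True
```

The point of the check is the privacy property discussed above: a verifier learns that a real human is behind the identifier, and nothing else.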

Zero Knowledge Proofs

Finally, there was both a wealth of discussion and a significant piece of news about Zero Knowledge Proofs (ZKPs), an innovation in privacy-enhancing cryptography. ZKPs remain largely theoretical, but would make it possible to do something wildly interesting with digital information: prove the truth of a claim, without revealing the underlying details. For example, a ZKP-based digital credential could be presented at a bar to prove you’re old enough to buy a drink, without revealing your specific date of birth.

There were sessions presenting things like digital wallets built to handle ZKPs in verifiable credentials, and software libraries that will make the technology easier to implement. But probably most exciting was the announcement that Google is hard at work on building ZKPs that work on the mDoc credential standard, using existing hardware. The search giant announced that it plans to make their techniques open-source in early 2025, which could spark a flurry of further technical advances and even real-world implementations.

Collaboration in Overdrive

The most interesting part of IIW 39 may not have been a technical topic at all, but the powerful mixing and collaboration on display everywhere you looked. That’s particularly notable because attendees represented a cross-section of the identity world in every sense. Founders of the first IIW 20 years ago rubbed elbows with young engineers bringing those pioneers’ ideas to life.  Government staffers leading the charge on mobile drivers’ licenses shared their issues with vendors building the tools they need. 

Perhaps most importantly, representatives of key standards came together to share insights and reconcile approaches. Teams developing mDocs/mDL, OpenID, European identity groups, the Open Wallets Foundation, and the Decentralized Identity Foundation kibitzed and compared notes with an eye towards getting these systems to work together better. With everyone in the same room, there were significant breakthroughs in finding shared solutions to sticky problems.

With digital identity gaining real-world traction in recent years, it's exciting to see the conversation around digital credentials shift from theory, to competing standards, and now to the marketplace of implementation, where everyone stands to win.

From Theory to Practice

This new work on ZKPs embodies perhaps the single overarching theme of IIW 39: a lot of things that the identity world has spent years talking about and refining are now actually becoming reality. For example, we saw intense interest in our work on California’s mobile driver’s license (though again, we’re biased) and discussion of strong privacy requirements that would be considered for Utah’s state digital ID. Utah appears to be truly committed to protecting the privacy of users, a gratifying payoff to decades of commitment to safety-centric design in digital ID.

The passage from ideas to reality even capped off this year’s IIW: many attendees went from Mountain View straight to Sacramento, where the California DMV was hosting an mDL hackathon. After days focused on aligning the principles and architecture of digital identity, developers were able to get their hands dirty building real-world tools and exploring ways digital identity can fulfill its promise of better security, privacy, and trust in the digital age.

Turning Conversations into Solutions

Wrapping up these recent discussions, we're energized by the progress and collaboration happening in the world of digital identity. The transition from theory to implementation is becoming more tangible, with advancements like personhood credentials and Zero-Knowledge Proofs paving the way for privacy-first solutions. As we continue to tackle challenges and refine standards, we’re excited to contribute to a future where secure, user-controlled digital interactions are the norm.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


SC Media - Identity and Access

Here’s what to know about Google Cloud mandating MFA by end of 2025

Google’s move will spur other providers to encourage MFA – and that’s a positive development for our industry.



Five ways to manage NHIs

Here are five ways to manage NHIs – a blind spot in identity security.



Elliptic

Crypto regulatory affairs: UK Government to progress regulatory updates for stablecoins and crypto from early 2025

The United Kingdom is planning to consult with the private sector on a planned regulatory framework for stablecoins, along with other crypto-related regulatory updates, from early 2025. 



Dock

Dock and Youverse Partner to Advance Private and Secure Biometric Authentication


We’re excited to announce our partnership with Youverse, a pioneer in privacy-preserving biometric solutions.

Youverse is redefining biometric authentication with its innovative, decentralized, and zero-knowledge architecture. Unlike traditional centralized databases, Youverse’s approach eliminates the risks associated with storing sensitive biometric data in a single location. Their cutting-edge technology, independently certified as top-tier for accuracy by international benchmarks, ensures unmatched levels of privacy, accuracy, and trust. Using multi-party computation, biometrics are securely distributed across nodes and only reconstructed on the user’s device, enhancing both security and convenience.

Through this partnership, Youverse will leverage Dock’s verifiable credential capabilities to empower relying parties within closed ecosystems to securely verify consumer data, ensuring that sensitive information remains under the sole control of its rightful owner. Any ecosystem participant using Dock will now be able to verify credential data that is biometrically bound to an individual, without ever exposing the biometric information itself, which remains securely held on the user’s device.

Together, Dock and Youverse are setting a new standard for secure, private, and user-controlled biometric authentication. 

Stay tuned as we work to revolutionize identity verification with our combined technologies.


KuppingerCole

Jan 15, 2025: Synthetic Data Market Analysis and Analyst Insights

Join us for a webinar on KuppingerCole’s latest Leadership Compass report on Synthetic Data. Discover how synthetic data is revolutionizing data security by mitigating risks associated with real data usage. We'll explore the leading capabilities in this space, examine innovative approaches, and discuss how synthetic data is being applied across industries to enhance machine learning models, ensure compliance, and protect sensitive information.

Aergo

Blocko, Aergo’s Key Technical Partner, Earns Top Tech Rating: Advancing Enterprise Blockchain…

Blocko, Aergo’s Key Technical Partner, Earns Top Tech Rating: Advancing Enterprise Blockchain Innovation

We’re thrilled to share that Blocko, Aergo’s trusted technical partner, has achieved a TI-3 rating in Korea Evaluation Data’s (KoDATA) Tech Credit Bureau (TCB) assessment. This recognition highlights Blocko’s solid technical foundation, business potential, and innovative capabilities.

The TCB rating system, designed to evaluate a company’s technology, market potential, and scalability, ranks from TI-1 to TI-10. Achieving a TI-3 rating positions Blocko within the top tier of innovative enterprises. This rating also qualifies Blocko for KOSDAQ — the secondary trading board of the Korea Exchange — under its tech-specialized listing program.

Driving Innovation with Advanced Enterprise Manager (AEM)

Blocko’s Advanced Enterprise Manager (AEM), which provides real-time infrastructure monitoring, seamless node operations, chain-specific permissions, and administrative features, has been a game-changer for blockchain infrastructure management. These capabilities make blockchain deployment and management significantly more accessible and efficient, allowing enterprises to focus on building value rather than navigating technical complexities.

A Shared Vision for the Future

Blocko has been a foundational partner for Aergo since the platform’s beginning, working hand-in-hand to develop and deploy enterprise blockchain solutions. This partnership allows Aergo to focus on its mission of delivering enterprise-grade blockchain networks and related solutions while leveraging Blocko’s technical expertise in infrastructure and node management.

This milestone is just the beginning. As Blocko continues to grow, Aergo remains committed to supporting its technical partner and expanding the impact of our blockchain platform. By combining Aergo’s vision with Blocko’s technical expertise, we are creating an ecosystem that drives meaningful transformation in the blockchain space.

Blocko, Aergo’s Key Technical Partner, Earns Top Tech Rating: Advancing Enterprise Blockchain… was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aergo Network Voting Rewards and What’s Next


Since its launch, the Aergo Network Voting Reward program has been a cornerstone of our efforts to engage and reward the community and network participants. Designed to foster participation in staking and governance, the program has grown stronger with each passing year, thanks to the dedication of our users.

A Quick Recap of the Program’s Journey:
2019–2020: Launch of the first staking rewards
2020–2021: Continuation of the program
2021–2022: The third phase upheld previous terms
2022–2023: Extension of rewards
2024: Community vote to determine the next phase of rewards

What’s Next?

The voting for the next phase of the Aergo Network Voting Reward program has officially begun! We remain committed to fostering a network built on collaboration and engagement. By casting your vote, you directly steer the next chapter of Aergo’s growth.

Get involved today and make your voice heard!

Aergo Voting

Aergo Network Voting Rewards and What’s Next was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 25. November 2024

KuppingerCole

Overcoming Stress and Building Resilience in a High-Stakes Environment


In this webinar on Mental Health in Cybersecurity, we'll explore the unique challenges faced by professionals in this high-stakes field. As cyber threats continue to evolve and intensify, so does the pressure on those tasked with defending against them. This constant state of vigilance, coupled with the potential catastrophic consequences of a breach, creates an environment ripe for stress, burnout, and other mental health issues.

In this session, we'll look at specific factors that make cybersecurity particularly susceptible to mental health challenges, from the relentless pace of technological change to the weight of responsibility in protecting sensitive data. We'll also discuss practical strategies for individuals and organizations to promote mental wellness in the cybersecurity workplace, drawing on insights from industry experts and the latest research in occupational psychology.




IDnow

5 takeaways from the ‘Why you’re doing remote onboarding wrong’ webinar.

A simple compliance requirement that needs to be completed asap or the most important part of the customer journey? Industry experts have their say.

The customer onboarding stage is arguably the most important touch point between a bank and its customer. Unfortunately, it’s also the part that a lot of banks are doing wrong, which can lead to compliance fines, fraud attacks and loss of reputation and revenue. 

To discuss how to create an optimal onboarding experience that can both fight fraud and improve user experience, we partnered with Transform Finance to organize the ‘Why you’re doing remote onboarding wrong’ webinar. 

Moderated by our very own Head of Product Marketing and Customer Communications, Ellie Burns, the webinar featured insights from Anna Stylianou, Founder of AML Cube and Bertram Storr-Paulsen, Head of KYC Services LC&I Denmark, Nordea. 

Now available on-demand, the hour-long webinar covers a variety of topics, from how to meet regulatory compliance requirements and secure your onboarding process to how to build a trustworthy and convenient user experience. 

Missed the webinar? Here’s our five key takeaways!

Why you’re doing remote onboarding wrong
Are you doing onboarding wrong? Our eclectic mix of panelists challenges conventional methods and reveals innovative ways to balance fraud prevention with user convenience. Watch now.

1. Relationships with regulators need to improve.

“We’re not even halfway there,” appeared to be the general feeling during the webinar. However, according to participants, there are signs that the relationship between banks and regulators is trending in the right direction.  

For example, the Frankfurt-based Anti-Money Laundering Authority (AMLA) is likely to make a significant impact, offering a more unified and consistent regulatory perspective. Having an open environment where banks can discuss what is failing, what is working well, and how learnings can be exchanged will only improve the relationship between banks and regulators regarding anti-money laundering and other fraud risks.

2. Change is inevitable. Is your culture ready to adapt accordingly?

As one of the biggest challenges of entering new markets is complying with different regulations and requirements, banks need to have a culture that is willing to innovate quickly and adapt accordingly. 

Onboarding customers the old way – via traditional Know Your Customer (KYC) processes like photocopying and stamping documents – is no longer acceptable; banks need to embrace new identity verification methods and onboarding procedures to fight fraud and cut costs in the modern age. 

For some reason, in financial services, compliance and fraud are always treated separately, with separate teams, etc., but they’re actually inextricably linked, especially during the onboarding stage.

Head of Product Marketing and Customer Communications, Ellie Burns

“So, how can banks ensure compliance teams and fraud teams are collaborating to make processes and onboarding flows as secure, trusted and seamless as possible for the customer?” asked Ellie.

3. The importance of balancing fraud and compliance.

Balancing effective fraud prevention with an onboarding process that works for all, regardless of risk level, is no easy task. The reality is that some customers are just not tech savvy or literate enough to use certain solutions, so banks need to be prepared to still ‘handhold’ the more vulnerable customers and offer more traditional onboarding options too. 

However, it’s also important to remember that when doing so, banks need to be prepared for the repercussions of these ‘old-fashioned’ approaches, such as increased fraud. 

Communication is key, especially between different departments with different goals. However, despite appearances, there are many common goals that should demand alignment between compliance teams and fraud teams. As such, these departments need to work together to find solutions that are both compliant and prevent fraud. It should never be an either/or.

4. Technology has a major part to play. But so too do humans!

There is no such thing as the ‘perfect onboarding solution’, not only due to different regulatory requirements but also customer preferences. Even across the European region there are different approaches to customer onboarding. For example, in the UK there are data checks and document checks, while in Germany, there is video verification and of course eID.  It’s also worth bearing in mind that as people tend to be multi-bankers, they’re often faced with multiple ‘perfect solutions’ following different standards and steps. Plus, of course, what is perfect for one person may not be for another. 

While a lot of solutions focus on things like multi-factor authentication to make onboarding smoother, faster and more automated, there are other solutions that use live facial detection, which requires in-person assistance and face-to-face interaction.  

Webinar participants agreed that there is still a danger in relying too heavily on fully automated processes, as artificial intelligence still cannot be 100% trusted, especially with social engineering attacks like APP scams. The value that video verification can provide should not be underestimated.

5. Is fraud prevention just a game of Whac-A-Mole?

Of course, the aim of banks is to stop fraud dead in its tracks – but the reality is that fraud is a constant, never-ending game of cat and mouse, with fraudsters evolving and developing ever more sophisticated attacks to penetrate defenses and vulnerabilities. So, if banks can’t stop 100% of fraud, then the aim of the game is to make it as difficult as possible.

So, how exactly can banks make it difficult for fraudsters? As bad actors often work around the clock using increasingly sophisticated fraud attacks, it can be difficult for banks to fight back. As such, there needs to be a more connected approach where banks are incentivized and feel willing to share information and learnings with one another. A multi-layered approach to fraud prevention is necessary as fraudsters work in a similar way!

Check out other webinar wrap-ups, like ‘5 takeaways from the ‘Sign of the times: The digital signature revolution’ webinar.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Indicio

AI, Chat, Decentralized Identity, and Digital Travel

The post AI, Chat, Decentralized Identity, and Digital Travel appeared first on Indicio.

SC Media - Identity and Access

Identity Challenges in Manufacturing - Tammy Klotz - CSP #202


Spherical Cow Consulting

Every Company is an IAM Company


Full disclosure: this post is all about encouraging you to become a member of IDPro®. They didn’t ask me to write it, but they’re a client, and I’ve been a fan long before we started working together.

If your company has a digital presence, at some level, you’re a technology company. That might not seem obvious if you’re running a retail chain or a healthcare clinic, but it’s true. The systems that underpin your operations—whether they’re managing customer logins, ensuring secure employee access, or verifying partner identities—are deeply rooted in the principles of digital identity.

And yet, for many organizations, the people working on these systems may not even realize they’re part of a larger field called “identity management.” They’re solving complex problems in access control, authentication, and identity proofing—often in isolation, without the benefit of shared best practices or peer guidance. That inefficiency? It makes me sad—especially because there’s a better way.

That’s where professional organizations like IDPro come in. I’ve been an IDPro member since long before I started contracting with them, and their value to practitioners and companies alike is impressive.

Why IDPro?

What is IDPro? It’s a professional association dedicated to fostering ethics and excellence in digital identity management. It provides tools, resources, and a vibrant community for anyone working in this space, whether they’re seasoned veterans or just discovering that identity is part of their job.

IDPro offers a lot:

1. A Vendor-Neutral Body of Knowledge

IDPro’s freely available Body of Knowledge (BoK) is a Big Deal. It’s a Creative Commons-licensed, vendor-neutral resource designed to help identity practitioners level up their expertise. Whether you’re trying to understand the basics of multi-factor authentication or grappling with complex federation protocols, the BoK offers practical, accessible guidance. I helped design the publication process, ensuring the BoK includes only the best, peer-reviewed material in the field.

2. CIDPRO® Certification

For companies that want to invest in their staff’s professional development, the CIDPRO certification is an excellent foundation-level credential. It’s built by practitioners, for practitioners, and is designed to validate a broad understanding of identity management concepts. Certified employees bring enhanced credibility and capability to their organizations. There are great resources on the website that walk you through what the exam is like and the material it covers. You can even sign up to take a practice exam!

3. A Thriving Community

IDPro’s member-only virtual discussions are some of the most valuable professional conversations I’ve had. The organization fosters a community where identity practitioners can exchange ideas, troubleshoot challenges, and share best practices in a thoughtful and respectful environment. This sense of connection is invaluable, especially in a field as fast-changing and complex as identity management. From authentication and IGA to identipets (because what’s a community without pet photos?), the channels are where some of the best identity conversations—and laughs—happen.

4. Advocacy and Networking

IDPro isn’t just about individual growth—it’s about lifting the entire field of identity management. From advocating for the industry with policymakers to hosting facilitated gatherings at major conferences like Identiverse, IDPro helps promote the conversation around identity while providing meaningful networking opportunities. Whether it’s a hallway chat at a conference or a casual meetup at the pub, the networking opportunities here are second to none.

Why Companies Should Care

When companies invest in organizations like IDPro, they’re not just supporting their employees—they’re ensuring their own success. Digital identity is foundational to security, compliance, and customer trust. By giving your teams access to IDPro’s resources, certification programs, and community, you’re equipping them to solve identity challenges more effectively. Go, team!

And let’s not forget: investing in professional development helps attract and retain top talent. Identity professionals who feel supported and connected are more likely to stay with your organization and bring their best to the table.

A Call to Action

If you’re reading this and wondering if IDPro membership is right for your organization, of COURSE it is. I would go so far as to say that if you’re not ready to dive into the deep end of standards development, this is a great way to start engaging with the smartest people in the identity world. Every company with a digital presence has a stake in the future of identity management; IDPro is the best partner I can imagine to help you navigate that future. Membership offers value at every level, from individual practitioners to corporate teams.

Seriously, get your company an IDPro membership—you won’t regret it. And when you do, tell the team at membership@idpro.org that I sent you!

The post Every Company is an IAM Company appeared first on Spherical Cow Consulting.


PingTalk

Boost Bank Launches Embedded Digital Bank App for Malaysians

Boost Bank Launches Embedded Digital Bank App in Five Months on PingOne Advanced Identity Cloud

 

Embedded banking helps underserved communities become financially healthy and makes everyday life easier, allowing individuals to save money, pay bills, receive discounts on products and services, and more. As part of this effort, Boost Bank launched to incredible success in the summer of 2024, and Boost Bank Chief Technology Officer Dr. Steven Gan talked us through the process of creating this wildly successful digital bank from scratch.

 

Boost Bank is a collaboration between Axiata and RHB Bank, Malaysia’s fifth-largest bank, and is one of five banks awarded a digital banking license by the Malaysian government. It is also the only digital bank whose launch was officiated by Malaysian Prime Minister Anwar Ibrahim.

Sunday, 24. November 2024

KuppingerCole

Beyond the Firewall: Proactive Cybersecurity with CTI and ASM

Join Matthias Reinwarth and Alexei Balaganski as they dive into the changing world of cybersecurity. In this episode, they talk about Cyber Threat Intelligence (CTI) and Attack Surface Management (ASM), exploring how security is moving from old-school models to more proactive, real-time threat detection. They also discuss how AI is shaking things up in cybersecurity and why understanding the dark web is more important than ever. The takeaway? Organizations need to tap into expert CTI and ASM services to stay ahead of today’s complex cyber threats.



Friday, 22. November 2024

Anonym

Anonyome Wins Prestigious SuperNova Award for Digital Wallet that Will Transform Agriculture

A digital wallet co-developed by Anonyome Labs and Indico which will transform trusted data sharing in the agriculture industry has won a prestigious Constellation Research SuperNova Award.

The Trust Alliance New Zealand (TANZ) Digital Farm Wallet took out the award in the “Digital Safety, Governance, Privacy, and Cybersecurity” category of the long-running awards program which recognizes disruptive and transformative solutions for end users.

TANZ’s Digital Farm Wallet uses verifiable credentials to create a secure data-sharing ecosystem in which farmers, growers, and other parties in the value chain (such as food producers, processors, retailers and exporters) can easily capture and share data across the sector while keeping it secure, protected and controllable.

The wallet allows farmers to securely hold critical data for their farms, such as their farm ID, greenhouse gas emissions, and farm boundaries, and share it directly and securely with relying parties. Using the wallet eliminates the need for large third-party databases and for repetitive, time-consuming manual information sharing, and prevents farmers’ data from being shared without their consent and to the detriment of their business.

In fact, the TANZ Digital Farm Wallet is set to transform New Zealand’s agriculture industry. The project trial proved that directly sharing authenticated information between farmers and distributors in a secure, privacy-preserving way has significant cost benefits. Farmers in the trial spent less time filling out forms and managing data and more time farming. Going forward, the data-sharing ecosystem will also achieve a more streamlined, transparent supply chain, in which farmers can prove their promises about their produce (such as organic status) and consumers can verify the claims for themselves at the time of purchase.

Learn more about verifiable credentials (also called reusable credentials).

TANZ says the Digital Farm Wallet has already won buy-in from important players in the industry, including banks, regional councils, and meat packagers.

The TANZ Digital Farm Wallet project was prompted by the recognition that trusted data sharing is vital to maintaining and growing New Zealand’s primary sector. It is an excellent case study for the agriculture sectors of other countries and for other industries that rely on trusted data sharing. Discover 17 Industries with Viable Use Cases for Decentralized Identity.

Anonyome Labs’ Chief Technology Officer, Dr Paul Ashley, says the company is proud to be leading the way with the increasingly popular decentralized identity technology of verifiable credentials: “This project has shown that replacing legacy paper-based documents and processing with a solution based on digital verifiable credentials increases trust, reduces costs, and opens new avenues for participants to interact in a secure and privacy-preserving way.” If you would like to learn more about the TANZ Digital Farm Wallet or discuss your own needs for a digital wallet solution, we’d love to talk to you. Contact us today and head to our website for a live demo video of what we offer.

The post Anonyome Wins Prestigious SuperNova Award for Digital Wallet that Will Transform Agriculture appeared first on Anonyome Labs.


liminal (was OWI)

Digital Identity News – Week of November 18

Liminal members enjoy the exclusive benefit of receiving daily morning briefs directly in their inboxes, ensuring they stay ahead of the curve with the latest industry developments for a significant competitive advantage.

Looking for product or company-specific news? Log in or sign-up to Link for more detailed news and developments.

Here are the main industry highlights of this week.

🪄Innovation and New Technology Developments

Maryland Launches Mobile ID App for Secure Age Verification and Digital ID Adoption

Maryland has launched the Mobile ID Check by MD app, allowing businesses to verify mobile driver’s licenses (mDLs) and digital IDs for age verification without extra hardware. Compliant with ISO/IEC standards, the app promotes secure data sharing and supports Maryland’s mDL initiatives that started in 2018. Businesses, including those at BWI Airport, are using the app for alcohol age checks. Collaborations with organizations like the TSA and DHS position Maryland as a leader in mDL standards and infrastructure, including the Digital Trust Service for efficient authentication. (Source)

China Launches Biometric Fast Lanes for Seamless Travel in Greater Bay Area

China is testing document-free fast lanes at Shenzhen Bay Port and Gongbei Port for travelers between Mainland China, Hong Kong, and Macao, using facial and fingerprint biometrics for verification. Eligible users include Mainland Chinese residents aged 14 and older with valid multi-entry visas and residents of Hong Kong and Macao with Mainland travel permits. While the system aims to streamline border crossings, travelers must still carry physical documents. This initiative supports the Greater Bay Area strategy to enhance regional connectivity and follows similar biometric advancements in Myanmar, Malaysia, and Indonesia. (Source)

New Zealand Launches Digital Identity Framework for Secure, User-Controlled Data Sharing

The New Zealand government has finalized its Digital Identity Services Trust Framework, promoting secure and privacy-focused digital identity services. It emphasizes user control over personal information, requiring consent for data sharing and ensuring data encryption in accredited digital wallets. The framework offers secure digital options like a digital driving license and bank ID, allowing citizens to choose what information to share. While accreditation for providers is voluntary, the framework aims to enhance confidence in safe and user-friendly digital identity systems. (Source)

Moldova to Launch EU-Compliant Biometric ID Card for Seamless Digital and Cross-Border Services

Moldova has approved a biometric ID card set to launch on March 31, 2025, pending parliamentary approval. It aligns with EU standards and will facilitate access to public services and digital authentication without in-person visits. The card includes a chip with facial and fingerprint data but does not include domicile details. Moldovans abroad will also have remote access to digital services. Citizens aged 14 and above must carry the new ID, supporting Moldova’s integration into the EU’s Digital Identity Wallet framework. (Source)

💰 Investments and Partnerships

Silverfort Acquires Rezonate to Launch Unified Identity Security Platform

Silverfort has acquired Rezonate to create a unified identity security platform that integrates Rezonate’s cloud capabilities with Silverfort’s solutions for protecting both human and non-human identities across on-premises and cloud environments. This platform aims to eliminate identity security silos, providing comprehensive visibility and real-time controls, while addressing identity security aspects like ITDR, ISPM, and entitlement management. Fueled by a $116M Series D funding and rapid customer acquisition, Silverfort aims to enhance operational efficiency and strengthen security, protecting all identities and assets, including legacy systems, from a single platform. (Source)

Hopae Secures $6.5M to Expand Blockchain-Based Digital Identity Solutions in U.S. and Europe

Hopae, a digital identity company, has secured $6.5 million to expand into the U.S. and Europe, focusing on compliance with international regulations like the EU’s eIDAS 2.0. Utilizing its patented Digital Credential eXpress (DCX) architecture, the company leverages blockchain for secure and scalable digital identity solutions. Founded by developers of Korea’s national DID system, Hopae aims to enhance government and private sector verification by streamlining digital credential issuance, verification, and revocation. (Source)

Cyera Raises $300M Series D, Valued at $3B, to Advance AI-Driven Data Security Platform

Cyera, a data security platform provider, has raised $300 million in Series D funding, increasing its valuation to $3 billion and total funding to $760 million since its 2021 founding. Led by Accel and Sapphire Ventures, the funding will support growth, including the recent $162 million acquisition of Trail Security to enhance its Data Security Posture Management capabilities. (Source)

Nuvei Goes Private in $3 Billion Acquisition Led by Advent International to Drive Global Growth

Nuvei has been acquired by Neon Maple Purchaser Inc., formed by Advent International, for $34.00 per share. This acquisition makes Nuvei a private company, with ownership stakes of approximately 46% by Advent, 24% by Philip Fayer, 18% by Novacap, and 12% by CDPQ. Philip Fayer, Nuvei’s Founder and CEO, remains a major shareholder after rolling over 95% of his shares. Following the acquisition, Nuvei’s shares will be delisted from the Toronto Stock Exchange and Nasdaq, and the company will deregister its shares under U.S. and Canadian securities laws, focusing on global growth through investments and acquisitions. (Source)

Bitsight Acquires Cybersixgill for $115M to Bolster Dark Web Threat Intelligence and Cyber Risk Management

Bitsight, a cybersecurity firm specializing in cyber risk management, has acquired Cybersixgill, a dark web threat intelligence provider, for $115 million. This acquisition aims to enhance Bitsight’s capabilities in assessing cyber risks by leveraging Cybersixgill’s expertise in analyzing dark web activities. Based in Israel, Cybersixgill monitors invite-only messaging groups and other platforms for emerging threats. Bitsight plans to integrate Cybersixgill’s technology and team to further develop its AI-driven cybersecurity products. (Source)

Thales Targets €25B Revenue by 2028 with Defense and Cybersecurity Growth Amid Market Challenges

Thales, Europe’s largest defense technology company, aims for 5%-7% annual revenue growth, targeting €25 billion by 2028, driven by rising global defense spending and cybersecurity demand. CEO Patrice Caine emphasized opportunities from increased defense investments and premium services for critical sectors, supported by acquisitions like Gemalto and Imperva. The company seeks to raise operating margins to 13%-14% by 2028, focusing on high-margin products. While analysts anticipated stronger growth, Thales prioritizes long-term demand for technologies like fighter radars over short-term spikes. Challenges in Europe’s satellite market persist, but talks with Airbus and Leonardo may offer solutions. (Source)

UltraPass Partners with Philippine DOTr to Pilot Biometric Identity Verification in Airports

Ultrapass Identity Corp, a US digital identity company, is partnering with the Philippine Department of Transportation (DOTr) to pilot a biometric identity verification solution at airports. This initiative aims to enhance security, streamline passenger processing, and improve the travel experience. The program aligns with international aviation standards, reduces wait times, and ensures data privacy. Originating from a US-Philippines trade mission, this collaboration supports regional connectivity and highlights the role of US technology in ASEAN’s digital transformation while respecting national data sovereignty. (Source)

⚖️ Policy and Regulatory

DOJ Pushes for Google Chrome Sale in Landmark Antitrust Battle

The U.S. Department of Justice (DOJ) is pushing for a federal judge to require the sale of Google’s Chrome browser as part of its antitrust case against the company. The DOJ argues that Google’s deals with phone makers like Apple to secure default search positions and the integration of Chrome with its search engine maintain its illegal monopoly in online search, which constitutes 90% of global searches. The agency also wants court oversight of Google’s Android division or its divestiture, and for websites to have the option to opt out of contributing data to Google’s AI models. Google argues that these proposals would hinder innovation and harm consumers. A court decision is expected in 2025 after a hearing in April. (Source)

Finastra Investigates Data Breach of Secure File Transfer Platform, Impacting Global Banks

Finastra, a London-based financial software provider for global banks, is investigating a data breach involving its Secure File Transfer Platform (SFTP). The breach, disclosed on November 8, resulted in the exfiltration of data, with a hacker claiming to have obtained 400GB of client files and documents, reportedly from IBM Aspera. Finastra has not confirmed the number of affected customers or the specific data types, and initial investigations suggest the breach may stem from compromised credentials. The company is working to identify impacted customers and assess the breach’s extent and cause. (Source)

US Charges Hackers in Global Cybercrime Spree Targeting Tech and Crypto Firms

The U.S. Department of Justice has charged five individuals, including one arrested in Spain, for a multi-year hacking spree targeting tech companies, cryptocurrency platforms, and telecommunications providers. The accused, linked to groups 0ktapus and Scattered Spider, allegedly used phishing, SIM swapping, and fake Okta portals to steal credentials and cryptocurrency worth millions. Victims included organizations in entertainment and virtual currency, with one losing $6.3 million. Prosecutors describe the group as a financially motivated network employing advanced social engineering tactics to breach at least 45 companies worldwide. Additional suspects are yet to be identified. (Source)

New NIST Drafts Aim to Strengthen Federal Identity Verification with Enhanced Security and Interoperability Standards

NIST has released final drafts of SP 800-157 and SP 800-217 to enhance identity verification for federal agencies. SP 800-157 expands the use of Derived PIV Credentials beyond mobile devices, introducing phishing-resistant multi-factor authentication. SP 800-217 outlines requirements for federating PIV credentials between agencies, focusing on protocols and trust agreements. Both documents aim to improve security and standardization in digital identity management. Public feedback is welcomed until January 10, 2025. (Source)

DocuSign Phishing Surge Exploits Trust in Regulatory Communications, Highlighting Gaps in Verification Security

Researchers report a 98% increase in phishing attacks since November 8, primarily targeting businesses through DocuSign impersonations. These scams exploit trust by mimicking legitimate agencies like the Department of Health and Human Services, sending urgent, fraudulent requests via authentic-looking DocuSign templates. Victims are pressured to respond quickly, with attacks using accurate terminology to evade security. This situation highlights the need for stronger verification protocols and staff training to reduce risks, as businesses face both immediate financial losses and long-term disruptions. Robust validation processes for sensitive communications are essential. (Source)

Vietnam Enforces Stricter Social Media Rules with Mandatory Identity Verification, Raising Privacy Concerns

The Vietnamese government will implement stricter social media regulations starting December 25, 2024, requiring user identity verification via registered phone numbers. All users, including those on international platforms, must authenticate their identities to post or share content. Parents must register accounts for children under 16 and monitor their online activity. Platforms are obligated to remove illegal content, and companies must comply with user data requests from authorities. Critics voice concerns about privacy, government overreach, and accessibility for those in remote areas with limited telecommunications. (Source)

Roblox Strengthens Parental Controls and Safety Measures Amid Child Protection Scrutiny

Roblox has introduced new parental controls and content safeguards amid concerns about child safety, following allegations that it prioritizes growth over user protection. Parents can now manage daily usage limits, block game genres, and access ratings for games. Users under 13 will be unable to text chat outside of games, in addition to an existing voice chat ban. Roblox claims these features were planned before the Hindenburg report and denies any inflated user metrics. The company has faced criticism from child advocacy groups and regulatory challenges, including a ban in Turkey for alleged child exploitation content. These updates reflect a wider industry trend toward stronger protections for minors. As of September, Roblox reported 88.9 million daily active users, with 40% under 13. (Source)

T-Mobile Targeted in Alleged Chinese Cyberattack on Wiretap Systems Amid Espionage Concerns

T-Mobile has reportedly been targeted in a cyberattack linked to the Chinese state-sponsored hacking group Salt Typhoon, as part of a broader campaign against U.S. and international telecom companies. The focus was on wiretap systems used for government access to customer data. T-Mobile stated that its systems and customer data remain secure, but it did not confirm its ability to fully assess potential breaches. The FBI and CISA have warned of ongoing Chinese cyber espionage efforts targeting sensitive communications. This incident is the ninth known cyberattack on T-Mobile in recent years, following a 2023 breach that exposed the personal information of 37 million customers. (Source)

The post Digital Identity News – Week of November 18 appeared first on Liminal.co.


HYPR

How to Prevent Evilginx Attacks Targeting Entra ID

Attackers continually refine their methods to compromise user identities and gain unauthorized access to sensitive systems. One particularly insidious threat is Evilginx, a phishing framework designed to bypass traditional multi-factor authentication (MFA) by operating as an adversary-in-the-middle (AitM) — sometimes known as man-in-the-middle (MitM) — proxy. Evilginx intercepts and manipulates communication between users and legitimate sites, enabling attackers to steal credentials, session cookies, and other sensitive data. It’s a favorite tool of threat groups such as the Russian-based Star Blizzard, as warned in a joint advisory from CISA, the UK National Cyber Security Centre, the Australian Cyber Security Centre, and the Canadian Centre for Cyber Security, among other governmental security bodies.

Threat researchers and incident response teams have reported a noticeable surge in phishing campaigns utilizing Evilginx, exploiting MFA’s reliance on session validation. Even with MFA in place, Evilginx captures session cookies after authentication is complete, granting attackers unauthorized access to accounts. In many cases, it can also bypass Windows Hello for Business. This makes it a particularly effective tool for targeting Microsoft Entra ID environments. This article peels back the layers on Evilginx, looking at how it operates, why it’s effective, and the best defenses to help keep your organization secure.

The Evolution of Evilginx

Originally developed as a pentesting tool to demonstrate the vulnerabilities of traditional MFA, Evilginx has evolved to become a cornerstone of sophisticated phishing campaigns. Using a modified version of the open-source nginx web server software, early versions focused on basic credential harvesting. Newer iterations, however, incorporate advanced features like session cookie interception and real-time proxying to bypass MFA entirely. Now named Evilginx 3 and written in Go, the framework is stable, adaptable and set up to target platforms like Microsoft Entra ID. It comes with built-in “phishlets” to easily configure identical login experiences for Microsoft 365, Citrix, Okta, and other sites.

Understanding Reverse Proxies

A reverse proxy is a legitimate, widely used technique in which a proxy server handles requests and responses on behalf of an origin server. It sits between clients, such as users’ browsers, and the origin server: incoming requests arrive at the reverse proxy, which forwards them on to the origin server. This helps organizations manage incoming traffic, distribute load across servers, and strengthen security by shielding the internal server structure. It also allows organizations to cache commonly requested content, reducing load times.

Reverse Proxy Flow
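The core forwarding step described above can be sketched in a few lines. This is an illustrative sketch only (all hostnames are hypothetical), showing how a reverse proxy rewrites an incoming client request before passing it to the origin server:

```python
# Minimal sketch of a reverse proxy's core forwarding logic (illustrative
# only; hostnames are hypothetical). The proxy receives a client request and
# rewrites it to target the origin server before forwarding it on.

from urllib.parse import urljoin

ORIGIN = "https://internal-origin.example"
ORIGIN_HOST = "internal-origin.example"

def build_forwarded_request(path: str, client_headers: dict) -> dict:
    """Rewrite an incoming client request so it targets the origin server."""
    headers = dict(client_headers)
    # The Host header must name the origin, or the origin's virtual hosting breaks.
    headers["Host"] = ORIGIN_HOST
    # Record the real client address so the origin's logs stay accurate.
    headers["X-Forwarded-For"] = client_headers.get("X-Real-IP", "unknown")
    return {"url": urljoin(ORIGIN, path), "headers": headers}

req = build_forwarded_request("/login", {"Host": "www.example.com", "X-Real-IP": "203.0.113.7"})
print(req["url"])  # https://internal-origin.example/login
```

A production reverse proxy (nginx, HAProxy, etc.) adds connection pooling, TLS termination, and caching on top of this same basic rewrite-and-forward step.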

How Evilginx Works

Evilginx leverages the concept of a reverse proxy, but one configured specifically to capture a user’s credentials and session cookies once the user is tricked into accessing the Evilginx URL instead of the legitimate target server.

The process goes something like this.

1. Phishing lure: The attacker lures the victim into clicking on a phishing link sent by email or SMS, which takes them to the Evilginx-created phishing site.

2. Fraudulent site: The phishing site consists of a fake login page that looks and behaves exactly like the legitimate site, complete with a valid TLS certificate and lock icon. When the user tries to log in, Evilginx forwards the request to the real service.

3. Credential harvesting: The user enters their username and password on the fake page, which Evilginx captures and sends to the genuine site. Evilginx also collects and passes along second authentication factors, such as OTPs and out-of-band approvals (e.g. push notifications to the Microsoft Authenticator app).

4. Session hijacking: If successfully authenticated, the legitimate service will return session credentials (tokens, session cookie), which Evilginx intercepts. The attacker uses the captured credentials and session cookies to directly access the user’s account.

5. Account takeover: Once the attacker has control of the session, they can change the user’s password and other information, locking the victim out.

Handling Federation Redirects

But what if Entra ID is configured to redirect to a different IdP (like ADFS) to perform federated authentication?

The flow is very similar to the simpler flow above, except Entra will return a 302 redirect to the downstream IdP. As long as Evilginx is configured to be aware of the redirect host, it will spin up a new “host” under its subdomain and proxy the redirect so the browser goes there instead.

So even if the user enters their credentials in the downstream IdP, the result is the same. The downstream IdP issues a federation token (e.g. SAML or OIDC) which Evilginx uses to get the final session tokens from Entra ID.
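The redirect handling described above amounts to rewriting the 302 Location header so the browser stays on a host the proxy controls. A hypothetical sketch of that rewrite (all hostnames invented for illustration):

```python
# Hypothetical sketch of AitM redirect handling: the proxy rewrites the IdP's
# 302 Location header so the browser lands on a proxy-controlled subdomain
# rather than the real downstream IdP. All hostnames are invented.

from urllib.parse import urlsplit, urlunsplit

PROXY_BASE = "aitm-proxy.example"

def rewrite_redirect(location: str) -> str:
    """Map a redirect target onto a proxy-controlled subdomain."""
    parts = urlsplit(location)
    # e.g. adfs.contoso.com -> adfs-contoso-com.aitm-proxy.example
    proxied_host = parts.hostname.replace(".", "-") + "." + PROXY_BASE
    return urlunsplit((parts.scheme, proxied_host, parts.path, parts.query, parts.fragment))

print(rewrite_redirect("https://adfs.contoso.com/adfs/ls/?client-request-id=abc"))
# -> https://adfs-contoso-com.aitm-proxy.example/adfs/ls/?client-request-id=abc
```

Because the rewritten host is still under the proxy’s wildcard TLS certificate, the browser shows a valid lock icon throughout the federated login.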

How Attackers Use Evilginx

Attackers leveraging Evilginx often start by targeting the weakest link: unprotected personal devices. A common scenario involves a phishing email sent to an employee’s personal email address, which is less likely to be secured by corporate defenses. For example, if you work for Acme.com, you might receive a spear-phishing email that appears relevant to your role or recent activities. Once you click the link, expecting to authenticate, the attacker’s Evilginx server intercepts the login process as described above, capturing credentials and session cookies, and eventually locking you out of your account entirely.

Generative AI has made these attacks easier and far more effective. By mining public data about employees — such as their social media profiles, published work, and LinkedIn connections — attackers can craft highly convincing, customized phishing campaigns (aka “spear phishing”) within minutes. Moreover, Evilginx makes it easy to set up the phishing site, providing exact replicas of login pages for Microsoft Entra ID, Okta and other popular services.

Inside an Evilginx Attack on Entra ID

In this attack demo, you can see how easy it is to hack into and take over an Entra ID account using Evilginx, even with “stronger” MFA with number matching turned on.

 

 

Defending Against Evilginx

Protecting against Evilginx attacks starts with basic, foundational defenses like two-factor authentication (2FA). While not immune to compromise — attackers can still steal session cookies — 2FA adds a layer of difficulty that may deter some threats. Another critical measure is network traffic inspection, particularly for enterprises. Monitoring where traffic is directed can help identify phishing URLs and flag malicious activity, though detection often occurs after users have already clicked on links.
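As one illustration of what traffic monitoring can flag, here is a deliberately naive heuristic that compares outbound hostnames against a protected login domain; real detection systems use curated threat feeds, certificate transparency logs, and many more signals (the domain below is hypothetical):

```python
# Deliberately naive network-inspection heuristic (illustrative only): flag
# outbound hostnames that closely resemble, but do not exactly match, a
# protected login domain. The protected domain is hypothetical.

import difflib

PROTECTED = "login.example.com"

def looks_suspicious(host: str, threshold: float = 0.8) -> bool:
    """Flag hosts that are near matches for the protected domain."""
    if host == PROTECTED:
        return False  # exact match is the legitimate site
    similarity = difflib.SequenceMatcher(None, host, PROTECTED).ratio()
    return similarity >= threshold

# A homoglyph-style lookalike ("1" for "l") scores high; unrelated hosts do not.
print(looks_suspicious("login.examp1e.com"))  # True
print(looks_suspicious("github.com"))         # False
```

As the article notes, even good detection often fires only after the user has clicked the link, which is why it complements rather than replaces phishing-resistant authentication.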

Employee phishing awareness training can also reduce the risk of falling for phishing attempts, although it’s unrealistic to expect perfect vigilance. Mistakes are inevitable, especially as attackers craft increasingly targeted, convincing lures.

The most effective strategy lies in adopting FIDO passkeys for authentication. Passkeys use domain binding, which ensures that authentication attempts will only succeed if the domain matches the one the passkey was registered with. This effectively renders reverse proxy tools like Evilginx useless, as they cannot impersonate the bound domain.
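Domain binding is enforced because the browser embeds the page origin in the signed clientDataJSON, and the relying party rejects assertions whose origin doesn’t match its own. A simplified sketch of that check (origins are hypothetical, and real WebAuthn verification also validates the challenge, RP ID hash, and signature):

```python
# Simplified sketch of the WebAuthn origin check that enforces domain binding.
# The browser records the page origin in clientDataJSON, which is covered by
# the authenticator's signature; the relying party rejects any assertion whose
# origin differs from its own. (Origins are hypothetical; real verification
# also checks the challenge, RP ID hash, and signature.)

import json

EXPECTED_ORIGIN = "https://login.example.com"  # the relying party's registered origin

def verify_client_data(client_data_json: bytes) -> bool:
    """Accept an assertion only if it was created on the expected origin."""
    data = json.loads(client_data_json)
    return data.get("type") == "webauthn.get" and data.get("origin") == EXPECTED_ORIGIN

legit = json.dumps({"type": "webauthn.get", "origin": "https://login.example.com"}).encode()
proxied = json.dumps({"type": "webauthn.get", "origin": "https://proxy.evil.example"}).encode()
print(verify_client_data(legit))    # True  -- assertion made on the real site
print(verify_client_data(proxied))  # False -- assertion made on a proxy's domain
```

An Evilginx-style proxy necessarily serves its phishing page from its own domain, so any assertion it relays carries the wrong origin and fails this check before credentials ever change hands.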

What About Windows Hello for Business?

Although Windows Hello for Business (WHfB) is a FIDO2-compliant authenticator, the way it is usually configured makes it vulnerable to Evilginx attacks. Most organizations set up WHfB as the primary authentication method, with a less secure fallback option, such as a password plus an SMS OTP or Microsoft Authenticator. Worse, there are Evilginx phishlets available that specifically bypass WHfB authentication (even if the user used it last time) by forcibly downgrading the flow to the more vulnerable fallback methods.

The key is to configure Conditional Access policies that disallow any less secure (non-phishing-resistant) fallback option. If a weaker fallback is available, attackers will exploit it, defeating the purpose of deploying WHfB in the first place.
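The effect of such a policy reduces to a simple allow-list over authentication methods. The sketch below is illustrative logic only; the method names are examples, not actual Entra ID identifiers or the Conditional Access API:

```python
# Illustrative sketch of a "phishing-resistant only" authentication policy.
# Method names are examples, not actual Entra ID identifiers.
PHISHING_RESISTANT = {"fido2_passkey", "windows_hello_for_business", "certificate"}

def allow_sign_in(methods_offered: list) -> list:
    """Keep only phishing-resistant methods; an empty result means deny.

    If weaker fallbacks (SMS OTP, authenticator push) stayed in the list,
    a downgrade attack like the WHfB phishlet could force their use.
    """
    return [m for m in methods_offered if m in PHISHING_RESISTANT]

print(allow_sign_in(["windows_hello_for_business", "sms_otp"]))  # ['windows_hello_for_business']
print(allow_sign_in(["sms_otp", "authenticator_push"]))          # [] -> deny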

How HYPR Thwarts Evilginx Attacks

HYPR is designed to outsmart the most sophisticated AitM tactics, including Evilginx attacks. HYPR Enterprise Passkeys leverage FIDO passkey standards, binding the domain to the key so that only login attempts on the correct domain can succeed. This effectively shuts down reverse proxy tools that rely on intercepting session cookies or credentials. HYPR only uses phishing-resistant, FIDO Certified passwordless MFA methods — it never falls back to a shared secret that can be phished or intercepted. It can be used as the primary authentication method or a phishing-resistant fallback for Windows Hello for Business.

See what this protection looks like during the same phishing attack demonstrated above.

 

 

More Layers of Identity Protection

On top of our leading passwordless architecture, our identity risk engine, HYPR Adapt, adds another layer of security by detecting and responding to risk signals — even if the correct credentials are used. Account recovery is another area frequently exploited by attackers. They employ social engineering to impersonate a legitimate user and convince the help desk to provision new credentials. HYPR’s identity verification solution prevents this by ensuring someone is the rightful account owner before allowing credentials to be issued. 

Read more about HYPR’s continuous, end-to-end identity assurance for your Microsoft Entra ID and hybrid environments or arrange a custom demo to see it in action.

 


Thales Group

Press release

22 November 2024

Thales confirms that the Parquet National Financier (PNF) in France and the Serious Fraud Office (SFO) in the United Kingdom have initiated an investigation in relation to four Thales entities located in France and the UK, regarding the performance of a contract in Asia.

Thales denies the allegations brought to its knowledge.

The Group is fully cooperating with the PNF in France and the SFO in the UK.

Thales complies with all national and international regulations.


KuppingerCole

Oracle Access Governance


by Nitish Deshpande

This KuppingerCole Executive View report looks at the current state and emerging trends of Access Governance. A technical review of the Oracle Access Governance is included.

Thales Group

COOPANS, the Alliance Managing Europe’s Largest Air Traffic Volume, Upgrades its Air Traffic Control (ATC) System with Thales

22 November 2024

COOPANS is a leading international cooperation between six European air navigation service providers in Austria (Austro Control), Croatia (Croatia Control), Denmark (Naviair), Ireland (AirNav Ireland), Portugal (NAV Portugal) and Sweden (LFV), which manages Europe’s largest traffic volume (14% of European air traffic).
COOPANS will modernise its common Air Traffic Control (ATC) system with Thales’s TopSky-ATC One, a product solution and associated governance model aligned with the European ATM Master Plan and the Digital European Sky initiatives, which aim to harmonise and improve air traffic management in Europe through digital technologies.
Photo: Christian Rivierre, VP Airspace Mobility Solutions at Thales, ringing in a new era in Air Traffic Management with the COOPANS ANSPs' CEOs: Philipp Piber (Austro Control), Mario Kunovec-Varga (Croatia Control), Dr. Peter Kearney (AirNav Ireland), Ann Persson Grivas (LFV, represented), Anders Rex (Naviair) and Pedro Angelo (NAV Portugal)

On November 19, 2024, the CEOs of the six COOPANS Air Navigation Service Providers (ANSPs): COOPANS Alliance Board Chair Philipp Piber (Austro Control), Mario Kunovec-Varga (Croatia Control), Dr. Peter Kearney (AirNav Ireland), Ann Persson Grivas (LFV, represented), Anders Rex (Naviair) and Pedro Angelo (NAV Portugal), came together at Thales Headquarters in Rungis, France, to officially launch the next phase of ATM system modernisation in Europe, paving the way for the implementation of the TopSky-ATC One system upgrade.

The TopSky-ATC upgrade will enable COOPANS to improve air traffic management in Europe through advanced features that allow better and faster coordination and more precise decision-making for air traffic controllers, letting them handle more flights. The enhanced user interface will provide clearer visualization of essential information and optimized ergonomics to streamline operations.

The upgraded system will strengthen the integration of radar data and flight plans, enabling better surveillance and greater accuracy in detecting aircraft trajectories. The advanced real-time decision support capabilities will facilitate the management of complex situations, thereby improving the safety and productivity of air operations.

With this upgrade COOPANS is also decisively moving forward in line with the ambitious Single European Sky (SES) project of the European Union, which aims to harmonize and optimize air traffic management across Europe, creating a single and efficient airspace.

"The upgrade of our system to the state-of-the-art TopSky-ATC solution is a significant milestone in order to be able to manage increasing air traffic in a safe, efficient and sustainable way in the future. A modern, open, and modular ATM system will ensure flexibility, adaptability, and interoperability, enabling seamless integration of new technological advancements. The commitment and professionalism of all our staff, who worked so hard on the specifications for the system modernisation, played a key role in ensuring that we could continue with this highly successful cooperation" said COOPANS Alliance Board Chairman Philipp Piber.

"We are pleased that COOPANS has selected Thales’ TopSky - ATC One offering, including its system upgrade and innovative governance model. This decision ensures that COOPANS will benefit from the most advanced air traffic control (ATC) technology available, as Thales continues to lead the evolution of the industry. By partnering with Thales, COOPANS will enhance interoperability, efficiency, and safety in air traffic management. Together, we are shaping the future of airspace, advancing toward a more integrated and sustainable system in line with the ambitious objectives of the Digital European Sky project." said Christian Rivierre, VP Airspace Mobility Solutions, Thales.

Thanks to a new governance model, Thales ensures that the priorities of its Air Navigation Service Provider customers are met, enabling the continuous evolution of TopSky – ATC solution to consistently address industry demands while improving the safety, efficiency, and sustainability of air operations across Europe.

The system upgrade is currently in the development phase. Until 2028, hardware deployment, software integration, and intensive validations will take place in parallel, along with operational and technical training. The implementation of the upgraded system in COOPANS is planned in three waves from 2028 to 2030.


auth0

What's New in the Auth0 Terraform Provider?

The Auth0 Terraform provider has many new features and updates. Learn all about what's new.

PingTalk

How to Use CIAM to Elevate the Customer Experience

The right customer identity and access management (CIAM) strategy can help you steer a safe course to a secure, seamless customer experience.

Digital channels, including websites, mobile apps, and social media, have become the primary touchpoint for establishing new customer relationships. In fact, 91% of adults ages 18 to 49 have purchased products or services online using a smartphone, according to Consumer Affairs.1 It’s crucial to make a good first impression during these customer interactions if you hope to build a loyal customer base.

 

This shifting dynamic creates new challenges and opportunities for businesses looking to attract and retain customers. The right customer identity and access management (CIAM) strategy can help you provide a positive customer experience.

Thursday, 21. November 2024

KuppingerCole

Passkeys in a Zero Trust World – Blessing or Curse?


In the modern digital landscape, organizations are confronted with growing cybersecurity challenges that demand stronger authentication methods. Zero Trust frameworks have become essential for bolstering security postures, placing a significant emphasis on identity verification. As traditional passwords become more vulnerable, passkeys are gaining traction for their phishing-resistant capabilities and their potential to transform authentication within Zero Trust environments.

Implementing passkeys, however, is not without its hurdles. Organizations must navigate evolving software ecosystems, inconsistent user experiences, and complex recovery processes. Balancing security requirements with user convenience remains a key challenge. This webinar will explore the various types of passkeys, their benefits, and the trade-offs between security and usability in achieving passwordless authentication.

Alejandro Leal, Research Analyst at KuppingerCole, will discuss the shift towards identity-centric security measures and the role of passkeys in building resilient digital environments. He will provide insights into how passkeys contribute to minimizing attack surfaces and enhancing overall security postures within Zero Trust frameworks.

Andre Priebe, Chief Technology Officer at iC Consult, will offer a high-level explanation of Zero Trust benefits from passkey adoption. He will explain passkey technologies, current implementation challenges, and best practices for adopting and scaling passkeys within organizations, focusing on improving security without compromising user experience.




SC Media - Identity and Access

North Korean IT worker scam linked to Chinese front companies

SentinelLabs reveals information on four previously unreported Chinese front companies taken down by the U.S. government Oct. 10.



Terms & Acronyms pt.2 - SWN Vault


Dock

Dock is partnering with Daon to streamline ID verification

We’re thrilled to announce that Dock is partnering with Daon to streamline ID verification and more! Since 2000, Daon has been at the forefront of digital identity assurance technology. From its early days in Ireland, Daon has grown to become a trusted global partner

We’re thrilled to announce that Dock is partnering with Daon to streamline ID verification and more!

Since 2000, Daon has been at the forefront of digital identity assurance technology. From its early days in Ireland, Daon has grown to become a trusted global partner in both the public sector and for some of the world’s most iconic brands, securing over 2 billion identities on 6 continents.

Through this partnership, Daon will leverage Dock’s Decentralized ID technology to unify verified data from sources collected during the digital identity verification process—government-issued digital IDs, bank-issued IDs, and more—into a single, proven verifiable credential. This reusable credential will accelerate further ID verification processes and provide simple, secure account access across businesses and siloed systems within a closed ecosystem, creating a seamless experience for users. 

Additionally, Daon will explore Dock's biometric-bound credential capabilities, tying biometrics, secured by advanced liveness detection, directly to the credentials to ensure they can only be used by the intended individual. 

Together, we’ll explore the future of mobile driver’s licenses (mDL) and eIDAS verification, unlocking the potential of decentralized ID for both Europe and America.

We’re excited to be working together on the future of digital identity verification!


Elliptic

Key insights from Elliptic's Global Crypto Regulation Landscape: 2024 Review

2024 has proved to be a year of fast-moving regulatory and policy change impacting the cryptoasset space. Across the globe, regulators have been working to address the opportunities and challenges presented by cryptoassets, creating new regulatory frameworks or updating existing ones in the process.



Thales Group

Thales unveils generative AI solution for Security Operations Centres (SOCs)

21 November 2024

Thales's AI accelerator cortAIx has unveiled GenAI4SOC, the first solution of its kind developed in France, to detect cybersecurity incidents on enterprise information systems by combining the use of generative AI with the subject-matter expertise of operators of critical civil and military information systems. As cyberthreats become more sophisticated, more frequent and more damaging, partly because of the increasing use of AI by cyberattackers, AI technologies can also help analysts to implement the most effective threat detection strategies. AI-augmented threat detection enables analysts to anticipate cyberthreats earlier and expand the range of systems under supervision.

At European Cyber Week in Rennes, Brittany, from 19-21 November 2024, Thales is presenting GenAI4SOC, a dedicated solution for Security Operations Centres (SOCs) that uses generative AI technologies for faster detection of cybersecurity incidents on enterprise information systems.

GenAI4SOC is designed to speed up threat detection and adapt automatically to the rapidly developing cyberthreat landscape. It enables SOCs to react faster to zero-day threats and vulnerabilities so that new detection rules can be built, deployed and scaled more quickly.

To provide effective cyber-attack detection and response solutions, Thales relies on Security Operations Centres (SOCs) to ensure 24/7 supervision of IT infrastructure, detect cybersecurity incidents and trigger coordinated responses and remediation planning.

Located all over the world, Thales's SOCs detect and analyse threats in real time, support response planning and verify that the IT infrastructure is in compliance with applicable security policies and regulations.

GenAI4SOC assists analysts by proposing detailed, verifiable responses:

- Natural-language explanations based on the tool's understanding of the threat as well as Cyber Threat Intelligence sources
- Strategy proposals for improved threat detection
- Creation of new detection rules for operators via a chat function trained using Reinforcement Learning from Human Feedback (RLHF)

Thales's new generative AI solution for SOCs draws on an extensive sovereign cybersecurity knowledge base including a range of Thales sources (Cyber Threat Intelligence, detection rule libraries, vulnerability monitoring) focused on critical civil operations (finance, insurance, automotive, manufacturing, energy, etc.) as well as defence, aerospace and space activities.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialising in three business domains: Defence & Security, Aeronautics & Space and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Ocean Protocol

DF116 Completes and DF117 Launches

Predictoor DF116 rewards available. DF117 runs Nov 21 to Nov 28, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 116 (DF116) has completed.

DF117 is live today, Nov 21. It concludes on November 28th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF117 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF117

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week, evenly distributing these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF116 Completes and DF117 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 20. November 2024

KuppingerCole

Transforming SOCs: The Power of SOAR Solutions


Cyberattacks are becoming increasingly sophisticated, requiring innovative approaches to cybersecurity. This webinar will explore how Security Orchestration, Automation, and Response (SOAR) platforms can revolutionize incident response by providing security teams with advanced threat detection and mitigation tools. We'll discuss the challenges of traditional SIEM systems and the transformative potential of integrating generative AI into SOAR solutions. 

Join this webinar to learn:

- The evolution of cyber threats and the need for advanced responses.
- How SOAR platforms streamline incident response and enhance SOC efficiency.
- The integration of generative AI to automate and improve security operations.
- Key considerations for selecting the right SOAR solution for your organization.
- Best practices for leveraging SOAR to mitigate risks in today’s dynamic threat landscape.


Redefining Cybersecurity: Facing the Next Generation of Threats


by Nitish Deshpande

As technology continues to evolve at a breakneck pace, so do the risks lurking in the shadows of digital transformation. The cybersecurity landscape is no longer just about firewalls and antivirus software—it’s a high-stakes chess game with increasingly cunning adversaries. To stay ahead, we must not only understand these next-generation threats but also embrace smarter, more adaptive strategies.

The New Face of Cyber Threats

AI-Driven Cyberattacks: Smarter, Faster, and More Dangerous

Artificial intelligence (AI) is no longer just a tool for innovation—it’s becoming a weapon in the hands of cybercriminals. AI automates attacks, identifies vulnerabilities at lightning speed, and creates malware that learns to outsmart detection systems. Defenders can’t afford to play catch-up. Staying proactive with AI-powered countermeasures is the only way to level the playing field.

Deepfakes: Trust Eroded in the Digital Age

What if you could no longer trust what you see or hear online? Deepfake technology has weaponized deception, enabling attackers to impersonate executives, manipulate public opinion, and execute devastating social engineering attacks. The fallout includes financial fraud, reputation damage, and a society questioning the authenticity of all digital content.

IoT: Billions of Devices, Billions of Risks

The Internet of Things (IoT) has revolutionized our lives, connecting everything from home appliances to critical infrastructure. But this connectivity comes at a price. Many IoT devices are built with weak—or nonexistent—security measures, making them easy targets. Worse, these devices can serve as backdoors to larger networks, amplifying the threat.

Ransomware: Evolving and Escalating

Ransomware has shifted gears, becoming more calculated and destructive. Attackers now employ double extortion tactics—encrypting data while also threatening to leak it unless hefty ransoms are paid. This evolution demands more robust defenses, including smarter recovery plans and airtight data backups.

Strategies to Outpace the Threats

Leveraging AI to Fight AI

If attackers are arming themselves with AI, so must defenders. AI-powered security tools can sift through massive data sets, detect anomalies, and react to threats faster than any human team ever could. Machine learning ensures systems grow smarter with every attack, turning past threats into future advantages.

Zero Trust: A Security Framework Built for Complexity

Zero Trust flips the traditional security model on its head: trust no one and verify everyone. By constantly authenticating users and devices, this model minimizes risks like unauthorized access and lateral movement. Yet, it’s no magic wand—Zero Trust is complex to implement at scale, which explains its slow adoption rate.

Behavioral Analytics: The Power of Predictive Insight

In cybersecurity, behavior often reveals intent. Analyzing user actions—like irregular login patterns or unusual data transfers—can flag potential threats before they escalate. Behavioral analytics provides the critical early warnings organizations need to respond swiftly and decisively.
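As a toy illustration of the idea (not any vendor's product), a baseline of a user's historical login hours can flag a sign-in far outside their normal pattern:

```python
import statistics

# Toy behavioral-analytics sketch: flag logins whose hour deviates sharply
# from a user's historical pattern. Data and threshold are illustrative.

def is_anomalous(history_hours: list, new_hour: int, z_threshold: float = 3.0) -> bool:
    """Flag a login hour more than z_threshold standard deviations from the baseline."""
    mean = statistics.mean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0  # avoid division by zero
    return abs(new_hour - mean) / stdev > z_threshold

weekday_logins = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # user normally logs in mid-morning
print(is_anomalous(weekday_logins, 9))   # False
print(is_anomalous(weekday_logins, 3))   # True (a 3 a.m. login stands out)
```

Production systems combine many such signals (geolocation, device, transfer volume) and score them jointly, but the core idea is the same: deviation from an established baseline triggers review.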

Setting the Bar Higher for IoT Security

The IoT ecosystem needs more than innovation; it needs standards. Industry collaboration is essential to enforce stricter measures like device authentication, end-to-end encryption, and regular firmware updates. Securing billions of connected devices isn’t just desirable—it’s non-negotiable.

Ransomware Resilience: A Comprehensive Playbook

To stay resilient against ransomware, preparation is key. Think regular backups, segmented networks, and well-trained employees. Add a strong incident response plan, and you’ve got the foundation for bouncing back without caving to criminal demands.

Building Resilience for an Unpredictable Future

The battle between cybersecurity professionals and threat actors isn’t going anywhere—it’s only intensifying. Agility, collaboration, and innovation are non-negotiable in this fight. By adopting advanced technologies and forward-thinking strategies, we can fortify our systems and stay one step ahead of the adversaries shaping tomorrow’s threats.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to discuss the cyber threat landscape and its future.


Anonym

The Top 10 Ways Bad Actors Use Your Stolen Personal Information


A bad actor isn’t only a poorly skilled thespian (ha ha). It’s also a person (or group) who intentionally acts to cause harm to a person or organization via computers, devices, systems or networks.

This type of bad actor most commonly affects individuals when they steal their personal information, such as name, address and credit card details in a data breach (though not all data breaches are malicious).

To date, more than 60% of Americans have been the victim of a data breach. In 2023, data breaches in the US increased by a massive 78% over 2022, and impacted an estimated 353,027,892 people.

Breaches don’t discriminate by industry. In fact, no industry is safe, with public administration, finance and healthcare the most at-risk industries for data compromise.

What’s more, data breaches don’t only happen on the internet. Personal information can be exposed via Bluetooth, text message, and the good old-fashioned stolen wallet or phone, too.

So, once a bad actor has your personal information, what do they do with it? Here are the top 10 things going on right now:

1. Identity theft: Using your stolen information to impersonate you for financial gain or to commit crimes
2. Financial fraud: Accessing your bank accounts, credit card information, or other financial accounts to make unauthorized transactions
3. Phishing: Sending fraudulent emails or messages pretending to be from legitimate organizations to trick you into revealing more information or clicking on malicious links
4. Social engineering: Manipulating you into divulging confidential information, often by posing as someone you trust or using your stolen information to build credibility
5. Account takeover: Gaining unauthorized access to your online accounts (email, social media, etc.) using your stolen usernames and passwords
6. Tax fraud: Using stolen personal information to file fraudulent tax returns and claim refunds
7. Medical identity theft: Using your stolen information to get medical services and prescriptions, or to fraudulently file insurance claims
8. Employment fraud: Using your stolen information to illegally gain employment or benefits
9. Blackmail or extortion: Threatening to expose your sensitive information unless you pay a ransom
10. Creating fake identities: Using your stolen information to create new identities for various fraudulent purposes

So, with so many ways to be scammed, how do you keep your information safe online and off? The simplest fix is to use MySudo.

MySudo is the world’s only all-in-one privacy app that lets you protect your information, secure your chat, and organize your life.

You protect your information with secure digital identities called Sudos, each with its own phone, email, handle, private browser and payment card. Anywhere you’d normally use your personal phone number, email or payment card, use your Sudos instead. Sign up for deals and discounts, book rental cars and hotel rooms, pay for concerts or a coffee, all without giving away your personal information.

Then, you secure your chat by calling, texting and emailing securely inside the app with other users using your Sudo handle—or communicate standard outside the app with everyone else. Your Sudo phone and email work just like your private ones and they protect you from spam and scams.

You can also use MySudo to organize your life. Shop through a Sudo, sell through a Sudo, eat through a Sudo and live through a Sudo. The power of Sudos lies in compartmentalization, separating your information into different silos or Sudos to reduce the impact if a data breach strikes and helping you keep all your activity contained within a dedicated Sudo purpose.

Once you know how many Sudos you need, choose one of our awesome value plans for a privacy set-up that’s right for you. Check out MySudo.

Want even more tools to protect your personal information? Try RECLAIM.

RECLAIM, powered by MySudo, is a new personal data removal service that uses machine learning and artificial intelligence to help you reclaim control of your personal information from the companies that store and sell it.  

RECLAIM scans your email subject lines and senders to identify which companies have your personal details, such as phone, address, and credit card details, and then instructs you in either switching out your personal information for Sudo information or asking the company to delete your data altogether. 

Remember, Sudos are secure digital profiles with phone, email, and payment cards to use instead of your own. You create your Sudos in the MySudo all-in-one privacy app, part of the same app family as RECLAIM.

Just released in beta, RECLAIM is a great place to start reducing the online exposure of your personally identifiable information and digital footprint, and boosting your data privacy. Check out RECLAIM. 

Here are more great tips for what to do if you’re caught in a data breach.

And why not download MySudo VPN to encrypt your internet connections while you’re at it? You’re on a roll!

Want even more? Check out our blog and popular podcast.

The post The Top 10 Ways Bad Actors Use Your Stolen Personal Information appeared first on Anonyome Labs.


Thales Group

Thales VesseLINK: Powering Connectivity and Safety on the High Seas

Published 20 November 2024

The 2024 Vendée Globe, which started earlier this month and will continue through March of 2025, is one of the most extreme sporting events in the world. It pits solo sailors against the vast, unpredictable forces of nature as they travel around the globe – racing against time and nature as they navigate treacherous waters, face extreme weather conditions, and rely on their boats and personal resilience to survive. 

For those sailing with Thales VesseLINK – powered by the Iridium Certus satellite network – onboard, one thing they don’t have to worry about is staying connected. This advanced communications solution enables enhanced connectivity and safety so that these skippers can focus squarely on facing the journey ahead.  

The Challenge of Solo, Non-Stop Circumnavigation

The Vendée Globe is famously known as "the Everest of the Seas." Competitors embark on a non-stop journey around the world without any external assistance, covering a theoretical distance of 45,000 kilometers (about 24,300 nautical miles) – the distance you would have to travel to sail around the world.

Starting in Les Sables d’Olonne, France, the skippers head down the Atlantic Ocean, across the Indian and Pacific Oceans, and back up the Atlantic to complete the grueling three-month race. This journey presents not only physical and mental challenges for the skippers but also logistical challenges in terms of navigation, communication, and safety.

Ship-to-shore Connectivity from Anywhere 

One of the biggest challenges with maintaining reliable communications at sea is enabling coverage over vast stretches of ocean. Enter Thales VesseLINK, a maritime communications solution that uses the Iridium Certus satellite network to provide seamless connectivity. The technology ensures that skippers racing with the solution have access to continuous data, voice, and video communications even in the most remote parts of the world. This enables several critical tasks at sea, including downloading weather data, delivering content to the media, staying in touch with friends, family, race team, and more.

Enhancing Safety Through Constant Communication

While the Vendée Globe is a solo journey, safety is always a top priority. VesseLINK provides crucial communication links that allow skippers to stay in touch with race organizers and emergency response teams. In the event of distress, VesseLINK enables immediate, reliable communication, allowing quick coordination of rescue efforts.

The Iridium Certus satellite network that powers VesseLINK is known for its global reach, providing a lifeline in remote ocean regions where traditional communication networks are nonexistent. VesseLINK’s data-sharing capabilities allow skippers to share their location, weather conditions, and onboard diagnostics with race organizers and other stakeholders, providing an extra layer of security and allowing organizers to monitor a boat’s position in real-time.

Moreover, with the real-time situational awareness provided by VesseLINK, skippers can receive timely weather updates, helping them make informed decisions when navigating severe weather systems. This technology, combined with the skippers' expertise, helps minimize risks during the race and ensures that they have the support needed to safely complete their journey.

As the world watches this year’s race, this powerful technology reminds us that while these sailors may be isolated, having connectivity they can trust means they’re never truly alone.


auth0

Native Login with Passkeys Is Now in Limited Early Access for Android Applications!

Native login with passkeys allows you to integrate passkeys into your native applications and offers a smooth user experience with all the benefits of passkeys.

Spherical Cow Consulting

Rethinking Identity Management: The Role of Non-Human Identities in Academic Research

Academia is facing challenges in managing non-human identities (NHIs), which are essential for modern research systems but often treated like human users. As NHIs grow in complexity, issues like token sprawl, access management misalignments, and compliance difficulties arise, especially in collaborative environments like high-performance computing. Traditional directories fail to manage these identities effectively.

Academia has always been about pushing boundaries—whether in knowledge, technology, or collaboration. But as research grows more complex and reliant on technology, so too does the need to address a hidden layer of identity management. I’m talking about non-human identities (NHIs): those workloads, APIs, batch jobs, and software systems that work tirelessly behind the scenes. This is more than service accounts and bots. This is the underlying infrastructure for modern IT systems.

NHIs aren’t a new concept, but how we manage them today isn’t just outdated—it’s risky. Let’s dig in.

What Are NHIs?

Think about the processes that underpin research in a university. Automated data collection? That’s an NHI. Research simulations running on high-performance computing (HPC) systems? Also NHIs. APIs that manage sensitive student and research data? You guessed it—NHIs. These identities are everywhere, yet we still treat them like human users in many cases, with joiner/mover/leaver workflows and directory mappings.

And while this “fit them into the human box” approach might work on a small scale, it doesn’t secure the infrastructure they’re tied to. That’s a problem.

Why NHIs Are a Challenge

NHIs often inherit the same challenges as their human counterparts, only amplified by scale and complexity. Here’s a snapshot of the issues:

- Token Sprawl: OAuth account tokens being passed around like candy at Halloween. (I feel like I need to make an analogy about cavities and decay, but I’ll just leave that here because ew.)
- Access Management: Misaligned permissions, often shared across workloads, create opportunities for breaches.
- Auditing and Compliance: Many HPC environments and collaborative research projects struggle to track what access NHIs have, much less prove compliance with regulations.
- Security Gaps: Relying on directories and manual processes doesn’t cut it when workloads operate across different systems and organizations.

A common example? Research collaboration in HPC environments. These systems often involve shared resources accessed by NHIs with wildly varying permissions. Without precise controls, compliance becomes a nightmare, and auditing feels like playing whack-a-mole with invisible targets.
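
One common way to rein in token sprawl is to mint short-lived, narrowly scoped credentials per workload instead of sharing long-lived ones. Here is a minimal illustrative sketch of that idea — not any specific product's API; the signing key, claim names, and scopes are all hypothetical, and a real deployment would use a proper KMS and a standard token format:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; fetch from a KMS in practice


def mint_workload_token(workload_id, scopes, ttl_seconds=300):
    """Mint a short-lived, narrowly scoped token for one workload.

    Short TTLs and per-workload scopes limit the blast radius of a
    leaked token, unlike long-lived credentials shared across jobs.
    """
    claims = {
        "sub": workload_id,
        "scope": scopes,
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig


def verify_token(token, required_scope):
    """Check the signature, expiry, and scope of a workload token."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or signed with a different key
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims["exp"] > time.time() and required_scope in claims["scope"]


# A batch job gets a token scoped to exactly one dataset.
tok = mint_workload_token("hpc-batch-42", ["read:dataset-a"])
```

A verifier can then reject the token for any scope it wasn't minted with, which is the property shared long-lived tokens can't give you.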

Directories: The Bottleneck We Can’t Ignore

But wait! We have directories to keep everything organized! Won’t that help? (All my enterprise IAM friends just did a full-body cringe reading that.)

Here’s the thing about directories: they’re fantastic for managing human identities in traditional environments. But when it comes to NHIs, directories quickly become a bottleneck. Why? Because they assume every identity—human or non-human—can be neatly slotted into a joiner-mover-leaver model.

For NHIs, this model is fundamentally flawed:

- No Natural Lifecycle: Workloads, APIs, and batch jobs don’t “move” or “leave” in the same way people do. They’re created and destroyed based on operational needs, often spinning up and down in milliseconds. A directory simply can’t keep pace with this churn.
- Token Dependency: OAuth tokens are often used as a workaround, passed around to grant temporary access. But this approach doesn’t scale—it’s prone to sprawl, lacks visibility, and creates security risks when tokens are misused or stolen.
- Lack of Context: Directories were designed for human-centric workflows, meaning they lack the context required to manage the nuanced relationships NHIs have with systems, resources, and data.

The result? Academic IAM systems often end up overburdened and unable to scale to the demands of modern, complex environments. Imagine trying to cram a sprawling HPC infrastructure into a directory originally built to manage faculty and students—it’s like forcing a square peg into a round hole.
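
To make the lifecycle mismatch concrete, here is a toy sketch (my illustration, not any real IAM product) of a workload registry where identities simply expire, rather than moving through a joiner-mover-leaver workflow:

```python
import time
import uuid


class WorkloadRegistry:
    """Registry where identities carry a TTL and expire automatically,
    instead of following a human joiner-mover-leaver lifecycle."""

    def __init__(self):
        self._entries = {}  # workload id -> (name, expiry timestamp)

    def register(self, name, ttl_seconds):
        """Create an ephemeral identity that is only valid for ttl_seconds."""
        wid = str(uuid.uuid4())
        self._entries[wid] = (name, time.monotonic() + ttl_seconds)
        return wid

    def is_active(self, wid):
        """An identity is active only if it exists and has not expired."""
        entry = self._entries.get(wid)
        return entry is not None and entry[1] > time.monotonic()

    def sweep(self):
        """Garbage-collect expired identities; returns how many were removed."""
        now = time.monotonic()
        expired = [w for w, (_, exp) in self._entries.items() if exp <= now]
        for w in expired:
            del self._entries[w]
        return len(expired)
```

The point of the sketch: nothing "leaves" here, so a directory built around deprovisioning workflows has no step to hook into; expiry and sweeping are the whole lifecycle.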

The Role of DevOps, IT, and IAM Teams

Managing NHIs isn’t a one-team job—it’s a cross-functional effort. DevOps and IT teams usually own the operational infrastructure, while IAM teams handle policy enforcement. But these groups often speak different “languages,” making collaboration tricky.

That’s where standards and architecture frameworks come in. Efforts like the IETF’s WIMSE draft aim to create a shared understanding of how to secure NHIs in multi-system environments. It’s a step in the right direction, but adoption isn’t straightforward.

Building Better NHI Management

So, how can academia start tackling the NHI problem more effectively?

- Establish Clear Ownership: Decide who is responsible for managing NHIs, from provisioning to decommissioning.
- Adopt Standards: Leverage frameworks like SPIFFE and WIMSE to create consistent, scalable trust models. Learn how to use the Shared Signals Framework and the Continuous Access Evaluation Profile (CAEP).
- Invest in Automation: Automate the boring stuff, like token issuance and revocation, to reduce human error. (Hot take: CAEP can help here, too.)
- Foster Collaboration: Create spaces for DevOps, IT, and IAM teams to align on priorities and processes.

Looking Ahead

The future of NHIs in academia isn’t just about solving today’s problems—it’s about enabling the next generation of research. Imagine a world where workload identities are as dynamic as the systems they operate in, seamlessly supporting complex collaborations across institutions. Standards and open-source tools will be key to making that vision a reality.

But here’s the catch: it’s not just a technical challenge. NHIs require governance, funding, and attention from leadership to ensure they’re managed sustainably. Without these, even the best tools won’t fix the problem.

I’ll be talking about this at the 2024 Internet2 TechEx in Boston. If you’d like my slides, drop me a note on LinkedIn and I’ll be happy to share!

Reach out if you want to learn more about navigating this process or need support with standards development. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post Rethinking Identity Management: The Role of Non-Human Identities in Academic Research appeared first on Spherical Cow Consulting.


Dock

Dock is partnering with Socure to revolutionize digital identity verification

We're excited to share that Dock is partnering with Socure to revolutionize digital identity verification! Socure’s mission has always been clear: verify 100% of good identities in real-time and eliminate identity fraud. With over 2,600 customers across financial institutions, government agencies,

We're excited to share that Dock is partnering with Socure to revolutionize digital identity verification!

Socure’s mission has always been clear: verify 100% of good identities in real-time and eliminate identity fraud. With over 2,600 customers across financial institutions, government agencies, and leading enterprises, they’re proud to be the gold standard in digital identity verification.

Now, by teaming up we’re taking these capabilities to the next level. 

The partnership allows us to combine their AI-driven analytics with our decentralized identity infrastructure to offer a more flexible, secure, consumer-centric identity solution.

We’re thrilled about what’s ahead and can’t wait to see the innovative solutions we’ll build together. 

Stay tuned for more updates on how we’re redefining trust in the digital world!


Thales Group

Thales’s Friendly Hackers unit invents metamodel to detect AI-generated deepfake images

Published 20 November 2024

As part of the challenge organised by France's Defence Innovation Agency (AID) to detect images created by today’s AI platforms, the teams at cortAIx, Thales’s AI accelerator, have developed a metamodel capable of detecting AI-generated deepfakes. The Thales metamodel is built on an aggregation of models, each of which assigns an authenticity score to an image to determine whether it is real or fake. Artificially generated AI image, video and audio content is increasingly being used for the purposes of disinformation, manipulation and identity fraud.

Artificial intelligence is the central theme of this year’s European Cyber Week from 19-21 November in Rennes, Brittany. In a challenge organised to coincide with the event by France's Defence Innovation Agency (AID), Thales teams have successfully developed a metamodel for detecting AI-generated images. As the use of AI technologies gains traction, and at a time when disinformation is becoming increasingly prevalent in the media and impacting every sector of the economy, the deepfake detection metamodel offers a way to combat image manipulation in a wide range of use cases, such as the fight against identity fraud.

AI-generated images are created using AI platforms such as Midjourney, Dall-E and Firefly. Some studies have predicted that within a few years the use of deepfakes for identity theft and fraud could cause huge financial losses. Gartner has estimated that around 20% of cyberattacks in 2023 likely included deepfake content as part of disinformation and manipulation campaigns. Their report[1] highlights the growing use of deepfakes in financial fraud and advanced phishing attacks.

“Thales’s deepfake detection metamodel addresses the problem of identity fraud and morphing techniques,”[2] said Christophe Meyer, Senior Expert in AI and CTO of cortAIx, Thales’s AI accelerator. “Aggregating multiple methods using neural networks, noise detection and spatial frequency analysis helps us better protect the growing number of solutions requiring biometric identity checks. This is a remarkable technological advance and a testament to the expertise of Thales’s AI researchers.”

The Thales metamodel uses machine learning techniques, decision trees and evaluations of the strengths and weaknesses of each model to analyse the authenticity of an image. It combines various models, including:

- The CLIP method (Contrastive Language-Image Pre-training) involves connecting image and text by learning common representations. To detect deepfakes, the CLIP method analyses images and compares them with their textual descriptions to identify inconsistencies and visual artefacts.
- The DNF (Diffusion Noise Feature) method uses current image-generation architectures (called diffusion models) to detect deepfakes. Diffusion models are based on an estimate of the amount of noise to be added to an image to cause a “hallucination”, which creates content out of nothing, and this estimate can be used in turn to detect whether an image has been generated by AI.
- The DCT (Discrete Cosine Transform) method of deepfake detection analyses the spatial frequencies of an image to spot hidden artefacts. By transforming an image from the spatial domain (pixels) to the frequency domain, DCT can detect subtle anomalies in the image structure, which occur when deepfakes are generated and are often invisible to the naked eye.
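
To make the DCT idea concrete, here is a toy sketch — my illustration, entirely unrelated to Thales's actual metamodel — that measures what fraction of an 8×8 block's spectral energy sits in the high-frequency DCT coefficients. Generation artefacts tend to leave anomalous energy in those coefficients, which is the signal a DCT-based detector looks for:

```python
import math


def dct2(block):
    """Naive 2-D DCT-II of a square block (illustrative only, O(n^4))."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out


def high_freq_ratio(block):
    """Fraction of total spectral energy in high-frequency coefficients."""
    coeffs = dct2(block)
    n = len(coeffs)
    total = sum(c * c for row in coeffs for c in row) or 1.0
    high = sum(coeffs[u][v] ** 2
               for u in range(n) for v in range(n) if u + v >= n)
    return high / total


# A smooth gradient (natural-image-like) concentrates energy in low
# frequencies; a noisy block (artefact-like) spreads it upward.
smooth = [[(x + y) / 14 for y in range(8)] for x in range(8)]
noisy = [[((x * 7 + y * 13) % 5) / 4 for y in range(8)] for x in range(8)]
```

A metamodel in the spirit of the article would then treat such a ratio as one authenticity score among several (alongside CLIP- and DNF-style scores) and aggregate them into a final verdict.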

The Thales team behind the invention is part of cortAIx, the Group’s AI accelerator, which has over 600 AI researchers and engineers, 150 of whom are based at the Saclay research and technology cluster south of Paris and work on mission-critical systems. The Friendly Hackers team has developed a toolbox called BattleBox to help assess the robustness of AI-enabled systems against attacks designed to exploit the intrinsic vulnerabilities of different AI models (including Large Language Models), such as adversarial attacks and attempts to extract sensitive information. To counter these attacks, the team develops advanced countermeasures such as unlearning, federated learning, model watermarking and model hardening.

In 2023, Thales demonstrated its expertise during the CAID challenge (Conference on Artificial Intelligence for Defence) organised by the French defence procurement agency (DGA), which involved finding AI training data even after it had been deleted from the system to protect confidentiality.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

[1] 2023 Gartner® Report on Emerging Cybersecurity Risks.

[2] Morphing involves gradually changing one face into another in successive stages by modifying visual features to create a realistic image combining elements of both faces. The final result looks like a mix of the two original appearances.


Tuesday, 19. November 2024

KuppingerCole

Identity Security and Management – Why IGA Alone May Not Be Enough

Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape. While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility, security and control. Mo

Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape.

While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility, security and control.

Modern identity management requires solutions that can bridge the gap between IGA and directory management. Advanced tools can consolidate visibility across hybrid environments, provide fine-grained control, and enhance delegation capabilities. These solutions complement IGA by addressing the limitations of native directory management and improving overall security posture.

Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will look at the challenges of breadth vs. depth in managing target systems, but also the common scenario of different teams being responsible for different parts of the infrastructure, such as the IGA solution vs. Microsoft Active Directory. He will provide insights not only into when to use multiple solutions, but also discuss approaches to a TOM (Target Operating Model) that leads to consistent management of diverse environments.

Robert Kraczek, Global Strategist at One Identity will showcase how solutions like Active Roles can serve as connectors to various directories, providing a single pane of glass for hybrid environments. He will demonstrate how these tools enhance security, improve efficiency, and complement existing IGA solutions to address the complexities of modern identity ecosystems.




Radiant Logic

ITGC Controls: Why Are They Essential And How To Execute Them?

ITGCs are an essential part of your strategy for securing and enforcing access rights. Find out why... And how to optimize them! The post ITGC Controls: Why Are They Essential And How To Execute Them? appeared first on Radiant Logic.

CyberArk Privilege Cloud: Protect your Privileged Accounts with a SaaS Solution

Learn more about the benefits of CyberArk's Privilege Cloud, a PAM solution in SaaS mode, and discover ways to extend its capabilities. The post CyberArk Privilege Cloud: Protect your Privileged Accounts with a SaaS Solution appeared first on Radiant Logic.

Are User Access Review And Access Recertification The Same Thing?

The user access review is a control function that is separate from the recertification of access rights. Learn more about why and when to use it. The post Are User Access Review And Access Recertification The Same Thing? appeared first on Radiant Logic.

Is Your European Company Prepared For The Digital Operational Resilience Act (DORA)?

Uncover DORA regulations, five key focus areas, and the significance of operational resilience in finance, and get compliant with Radiant Logic's secure software solutions. The post Is Your European Company Prepared For The Digital Operational Resilience Act (DORA)? appeared first on Radiant Logic.

Reducing IAM Technical Debt with an Identity Data Fabric Approach 

Gartner lists 5 key challenges that result from IAM technical debt; get our four step approach to a solution based on our Identity Data Fabric. The post Reducing IAM Technical Debt with an Identity Data Fabric Approach  appeared first on Radiant Logic.

ISMG Survey Finds that Many Identity Teams Lack Visibility and Operational Maturity

ISMG research surveyed over 100 IT leaders on their IAM challenges, and we’re pleased to share the results with you. The post ISMG Survey Finds that Many Identity Teams Lack Visibility and Operational Maturity appeared first on Radiant Logic.

Spring is Springing: What’s New from Radiant Logic in Spring 2024

Learn what Radiant Logic is bringing to identity in 2024. Our spring release makes it easy to connect, manage, and govern identity data—see what AI can do for identity. The post Spring is Springing: What’s New from Radiant Logic in Spring 2024 appeared first on Radiant Logic.

Making Identity Hygiene a Non-Negotiable for Organizational Security

Identity hygiene is the number one common denominator in any IAM program. Without clean data there are no accurate results. The post Making Identity Hygiene a Non-Negotiable for Organizational Security appeared first on Radiant Logic.

Artificial Intelligence and Identity and Access Management

Dive into the powerful influence AI-driven IAM, IGA, and generative AI for IAM will have in 2024, and the advantages you will find with an IAM Copilot on your side. The post Artificial Intelligence and Identity and Access Management appeared first on Radiant Logic.

Revolutionizing IAM with RadiantOne AI and AIDA

Learn how generative AI technology will revolutionize the way organizations govern and visualize identity data with unprecedented speed and accuracy. The post Revolutionizing IAM with RadiantOne AI and AIDA appeared first on Radiant Logic.

Thales Group

Protecting aircraft with artificial intelligence: Thales and partners selected for first European project to develop sovereign AI for embedded cyberdefence

Published 19 November 2024

Thales has been selected for the Artificial Intelligence Deployable Agent (AIDA) project funded by the European Commission through the European Defence Fund (EDF). A total of 28 European industry partners, start-ups and research centres have joined forces on this project to develop a sovereign AI-enabled cybersecurity agent to protect aircraft systems from cyberattacks. The goal of this three-and-a-half-year European project is to design an AI with an autonomous or semi-autonomous response capability to provide cybersecurity protection for aircraft systems such as onboard computers and electronic warfare systems on combat aircraft, which are vulnerable to increasingly sophisticated cyberattacks in today’s high-intensity conflicts. AIDA is the first European structural framework project in support of the NATO concept of Autonomous Intelligent Cyberdefence Agent (AICA).[1]


Thales is technical coordinator for the AIDA project funded by the European Commission, with CR14 in Estonia in charge of overall project coordination.


This EDF project is a response to three major challenges faced by the armed forces today: attack surfaces are growing due to battlespace digitisation; the cyberattack detection-response chain needs to be automated due to the ever-greater use of autonomous systems such as drones and robots; and AI is being used ever more widely both to launch and respond to cyberattacks.

Christophe Salomon, Executive Vice President, Secure Communications & Information Systems, Thales: “This project initiated by the European Union is fundamental to the security of our combat systems and the sovereignty of our cyberdefence capabilities. It is a chance for Thales to consolidate its strengths in onboard aircraft systems and sovereign cybersecurity solutions, and a further opportunity to leverage our AI hacking expertise. Thales's AI accelerator, and in particular cortAIx, will be directly involved in the AIDA project. The ultimate goal is to employ AI-enabled techniques for detecting threats and protecting aircraft systems from the growing risks and dangers encountered in today’s high-intensity, technology-driven conflicts.”

Responding to the 2023 European Defence Fund call for projects for the development of deployable autonomous AI agents,[1] Thales submitted an innovative proposal based on the training of intelligent cyberdefence agents capable of identifying, protecting, detecting and responding to cyberthreats in real time in the five military operating domains[2]: land, air, sea, space and cyberspace.

Thales will also lead the project to develop a prototype aircraft using frugal AI agents to protect electronic warfare equipment installed on combat aircraft. This prototype will be tested, using Thales’s Cybels Analytics solution in particular, in scenarios including cyber-electromagnetic threats and advanced adversarial AI attacks.

AI is being used increasingly in the theatre of operations to increase the detection performance of air defence radars, for example, and to help plan tactical missions and assign tasks to swarms of drones and robotic systems. This type of AI must be reliable, robust and cybersafe to prevent it being exploited by hostile forces in any environment (land, sea, air, space and cyberspace). To counter this type of threat, Thales’s Friendly Hacker Unit will conduct a battery of adversarial AI attacks and define appropriate countermeasures to ensure that these cyberdefence AI agents can never become targets themselves.

Global leader in data protection and cybersecurity

As a world leader in cybersecurity, with more than 5,800 experts in 68 countries, Thales is involved at every stage in the civil and defence value chain: Identify, Protect, Detect, Respond, Restore. Thales develops sovereign products including encryptors and sensors for governments and institutions to protect their critical information systems, as well as sovereign cyberthreat detection products to protect embedded and onboard systems. Thales is a trusted partner of the Galileo satellite navigation system, operating a number of national encryption laboratories in Europe and supplying NATO member countries with the only tactical IP encryptor with "Cosmic Top Secret" security certification. Thales is also a strategic partner of the German, UK, French and Belgian defence ministries for the construction and handover of key management centres and infrastructure.

AI at Thales

Thales is a major player in trusted, cybersafe, transparent, explainable and ethical AI for armed forces, aircraft manufacturers and critical infrastructure providers. The Group employs over 600 engineers specialising in AI, and around 100 doctoral candidates are conducting their AI research with Thales. Organised within Thales's AI accelerator across research (AI Lab), systems such as decision support (AI Factory) and sensors such as sonar, radar, radios and optronics (AI Sensors), these experts are helping to incorporate AI into over 100 of Thales's products and services. Thales's AI capabilities draw on the most advanced sensor and system technologies to address the full spectrum of user requirements in the defence, aviation, space, cybersecurity and digital identity industries. Trusted AI is designed to meet the specific security and sovereignty needs of Thales's customers. It brings greater efficiency to data analysis and decision support and speeds up the detection, identification and classification of objects of interest and target scenes, while taking account of specific constraints such as cybersecurity, embeddability and frugality in critical environments.

In 2023, the Group was Europe’s top patent applicant in the field of AI for mission-critical systems. Also in 2023, the Group's Friendly Hacker Unit demonstrated its credentials at the CAID challenge (Conference on Artificial Intelligence for Defence) organised by the French defence procurement agency (DGA), which involved finding AI training data even when it had been deleted from the system to preserve confidentiality.

Thales’s European partners in the AIDA project:

SIHTASUTUS CR14 (CR14)

THALES SIX GTS France (TSGF)

THALES SA (TRT)

THALES AVS FRANCE SAS (TAVS)

THALES DMS FRANCE SAS (TDMS)

INDRA SISTEMAS SA (IND)

LEONARDO - SOCIETA PER AZIONI (LDO)

TELESPAZIO SPA (TPZ)

AIT AUSTRIAN INSTITUTE OF TECHNOLOGY GMBH (AIT)

SPACE HELLAS ANONYMI ETAIREIA SYSTIMATA KAI YPIRESIES TILEPIKOINONIONPLIROFORIKIS ASFALEIAS - IDIOTIKI EPICHEIRISI PAROCHIS YPERISION ASFA (SPH)

HONEYWELL INTERNATIONAL SRO (HON)

WOJSKOWA AKADEMIA TECHNICZNA IM.JAROSLAWA DABROWSKIEGO (WAT)

EVIDEN TECHNOLOGIES SRL (EVD)

Decent Cybersecurity s. r. o. (DEC)

GYALA S.R.L. (GYA)

FORSVARETS FORSKNINGINSTITUTT (FFI)

SensorFleet Oy (SEN)

NIXU OYJ (NIX)

Aliter Technologies, a.s. (ALI)

THALES EDISOFT PORTUGAL, S.A. (EDI)

HITEC LUXEMBOURG SA-HITEC (HIT)

MINISTERUL APARARII NATIONALE (MET)

INSTITUTO SUPERIOR DE ENGENHARIA DO PORTO (ISEP)

DOTOCEAN (DOT)

WB Electronics S.A. (WBE)

ADVOKAADIBUROO SORAINEN OU (SOR)

HarfangLab SAS (HAR)

AKHEROS SAS (AKH)

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital Identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


[1] Autonomous Intelligent Cyber Defense Agent (AICA): A Comprehensive Guide | SpringerLink: https://link.springer.com/book/10.1007/978-3-031-29269-9

[2] EDF-2023-DA-CYBER-DAAI: Deployable Autonomous AI Agent

[3] Multi-Domain Operations in NATO – Explained – NATO’s ACT: https://www.act.nato.int/article/mdo-in-nato-explained/#:~:text=Within%20NATO’s%20structure%20there%20are,independent%20entities%20within%20national%20militaries.

Contact: Cédric Leurquin. Press release, 19 Nov 2024. Thales is technical coordinator for the AIDA project funded by the European Commission, with CR14 in Estonia in charge of overall project coordination.

Tokeny Solutions

Tokeny’s Talent | Satjapong’s Story

The post Tokeny’s Talent | Satjapong’s Story appeared first on Tokeny.
Satjapong Meeklai is Senior DevSecOps Engineer at Tokeny.

Tell us about yourself!

Hi guys. My name is Satjapong Meeklai. I’m Thai, born and living in Bangkok. I’ve been passionate about technology since I was young, so I spent most of my time using computers and the internet to learn new things. After finishing high school, I decided to study computer science at a university in Thailand, and fortunately, after graduating, I received a Japanese government scholarship to continue my higher education in Tokyo. After obtaining a master’s degree, I came back to Thailand and devoted myself to hands-on technical work, trying various roles in the field. Eventually, I decided to focus on becoming a DevSecOps engineer working in fintech and Web3 startups, because I believe in these industries and want to contribute to this major shift, which I hope will create a positive impact for people around the world.

What were you doing before Tokeny and what inspired you to join the team?

After completing my master’s degree, I was drawn to data science and AI, so I pursued a role as a data scientist. I joined an early-stage startup focused on sentiment analysis, where I wore many hats due to the small team. Besides building machine learning models, I handled data cleaning, preprocessing, backend development, and DevOps. Through this experience, I discovered a passion for DevOps, leading me to shift my career in that direction.

I continued working in startups, valuing their fast-paced, impactful environments. For four years, I was the lead DevOps engineer at Opn, a Thai fintech company that became a unicorn. Managing a team of nine, I contributed meaningfully to the company’s success, which remains one of my proudest achievements.

While at Opn, I became interested in Blockchain and Web3, eventually leaving to join a small Web3 startup in digital asset custody. Although the company closed due to market challenges, my interest in Web3 grew. This led me to Tokeny, a platform for tokenizing traditional assets, which I see as a bridge to open finance in the Web3 era. I’m excited to help drive this transformation.

How would you describe working at Tokeny?

So far it has been very pleasant for me. People here are kind and nice, but they work hard, and they’re good at what they do. I can feel the determination behind what we want to build and deliver to the community. We are here to create change. That is what I can tell after working here for several months.

What are you most passionate about in life?

I think not dying in vain would probably be something I’m passionate about the most. I want the existence of myself to have a positive impact and influence on people around me. I no longer dream of changing the world myself but I’d like to support, contribute, and be a part of that something together with others to create positive results for the community, society, country, and/or the world instead. In the end, what I care about the most is myself being in a position where I’m proud and happy about myself, the decisions I made, the things I do, and do not regret how I live and what I’ve done to people around me.

What is your ultimate dream?

Be one of the early employees of an incredibly successful tech company while being a good, caring leader to my family.

What advice would you give to future Tokeny employees?

First, you should believe in your own vision. Then try to align that vision with the company’s. You will find that whatever you do, whether for yourself or for the company, is meaningful in itself.

What gets you excited about Tokeny’s future?

I’m excited about our mission to build the world of open finance and how this will change the world of traditional finance. It’s pretty interesting to see what will change in 5 to 10 years from now with the power of the Web3 industry and Tokeny.

He prefers: Coffee over Tea, Book over Movie, Hybrid over working only from the office or from home, Dogs over Cats, Text over Call, Burger over Salad, Mountains over Ocean, Beer over Wine, Countryside over City, Slack over Emails, Casual over Formal, Crypto over Fiat, and Morning over Night.



KuppingerCole

Analyst's View: Synthetic Data

by Anne Bailey

Synthetic data generation is a highly innovative solution to challenges of test data quality, data sharing, and data privacy and security. Senior Analyst Annie Bailey shares insights from the inaugural Leadership Compass on Synthetic Data on the dynamic development of this market.


Monday, 18. November 2024

Spruce Systems

Industry Spotlight: Top 10 Ways Verifiable Digital Credentials Can Transform Government

Explore how verifiable digital credentials can address challenges in government identity systems, offering secure, efficient, and privacy-focused solutions for a range of applications.
A Need for Verifiable Digital Credentials in Government

Government agencies face significant challenges in delivering secure, reliable identity credentialing and verification processes that are built for today’s digital world. Protecting residents' data from unauthorized access is essential, as is providing secure, accessible ways for residents to easily verify their identities across digital and physical channels as they go about their day-to-day.

The outdated, paper-based systems that exist today slow down government processes and introduce vulnerabilities such as fraud, inefficiencies, and elevated administrative costs. These challenges not only affect the security and privacy of residents’ data, but also put a strain on government resources. We believe that to meet the demands of a digital-first society, agencies must transition away from paper-based credentials, which are vulnerable to tampering, to secure, verifiable digital credentials. Read on to learn more about the top 10 real-world applications in government today, and how SpruceID is partnering with agencies on digital transformation.

Today’s Top 10 Real-World Applications

When it comes to verifiable digital credentials in government, 10 use cases is just barely scratching the surface. However, the list below outlines several in-demand applications today where digital credentials bring significant advantages, greatly benefiting both government entities and the people they serve:

1. Mobile Driver’s License (mDL): Physical IDs may be the norm, but they are easily lost, stolen, or damaged, making residents vulnerable to identity theft and fraud. Law enforcement, businesses, and government agencies spend valuable time verifying IDs, and the reliance on physical cards slows down services and increases errors. With high-assurance verifiable digital credentials (VDCs), verification becomes faster, more secure, and far less vulnerable to tampering. The added convenience and security offered by a mobile driver’s license creates a streamlined, fraud-resistant environment where residents don’t have to rely on easily compromised physical cards. Read more about how SpruceID helped the State of California implement their mobile driver’s license program, and the benefits they’ve seen so far.

2. Outdoor Licenses and Permits: Today’s outdoor licenses and permits (such as boating or fishing licenses) are largely paper-based, easy to lose or counterfeit, and difficult to enforce. Conservation officers lack real-time verification tools, making enforcement difficult and allowing illegal activities to go unchecked. Digital permits with VDCs provide instant, reliable proof with an easy way to verify the credentials, supporting conservation efforts and reducing illegal activities, all while protecting public lands and waters. Learn about how SpruceID worked with Utah to launch digital off-road vehicle permits and how they’ve benefitted.

3. Incarcerated Individuals, Criminal Justice, and Law Enforcement: Today, approximately 27% of formerly incarcerated individuals are unemployed. This statistic highlights the significant barriers to employment these individuals face, particularly in accessing proper identification. The criminal justice system’s reliance on outdated, paper-based records not only creates vulnerabilities in identity management and record accuracy but also complicates access to essential rehabilitative services. These inefficiencies lead to security risks, identity errors, and hindered re-entry support. Verifiable digital credentials can help facilitate access to job applications, housing, and social services, removing barriers for those re-entering society and rebuilding their lives.

4. Marriage and Birth Certificates: Paper marriage certificates, birth certificates, and even social security cards are essential but vulnerable to loss, damage, and forgery, which complicates access to legal rights and government services. Verifying these documents can also be a slow process, creating roadblocks for individuals needing to prove familial status for health benefits, citizenship, and legal matters. VDCs ensure secure, instant access to these vital records, protecting individuals’ identities and preventing fraudulent claims. They can also help streamline processes such as enrolling your new baby onto your health insurance, as discussed in our recent blog post.

5. Social Services Access (SNAP/Medicaid): Accessing social services with paper-based documentation is cumbersome and prone to errors. Individuals who qualify may face delays or rejections, while ineligible recipients can exploit the system, diverting funds from those in need. By using VDCs, agencies can improve efficiency and reduce fraud by allowing for real-time verification of eligibility, ensuring benefits reach the right individuals faster and reducing strain on the social services infrastructure.

6. Civic Participation: Fraud and manipulation risks threaten the integrity of civic participation, such as responding to RFCs (requests for comments) or submitting feedback to political representatives. Verifiable digital credentials create a secure, accessible way to ensure that, for example, someone is a resident and not a bot, without oversharing information. This approach has also been considered for simultaneously improving our voting systems’ security and engagement with the new generation.

7. Land and Property Records: Paper-based land records can be misplaced, tampered with, or falsified, leading to property disputes, unclear ownership, and legal issues that impact families and businesses. To mitigate these issues and more, VDCs provide a secure way to manage property records, ensure property rights are protected, and enhance transparency in property ownership.

8. Disaster Relief: In times of disaster, quickly verifying the identity and eligibility of individuals seeking relief is crucial but challenging with traditional paper documents. When someone loses their paper documents, aid can be delayed, misallocated, or vulnerable to fraud, hindering the response and leaving affected people without timely support. VDCs allow for quick, secure verification of those in need, ensuring relief reaches the right people and enabling response teams to act efficiently during critical moments. Our credentials are accessible even in remote areas, without wifi or cell service.

9. Government Employee Access and Verification: Current reliance on physical IDs for government employees and veterans can lead to unauthorized access, fraud, and security breaches. Verifiable digital credentials provide a secure way to verify the identity of government employees such as military personnel or veterans, protecting restricted spaces and sensitive information, while allowing instant access to necessary services and benefits that are exclusive to them.

10. Cross-Border Travel Credentials: Physical cross-border travel documents such as customs clearance forms, visas, and health certificates are vulnerable to forgery and theft, creating security risks and causing delays at border crossings. VDCs offer a way to consolidate identity, customs, and health credentials into one streamlined verification process. This speeds up clearance, improves safety, and enhances global security and health compliance, delivering a more efficient experience for travelers and border authorities alike.

SpruceID’s Solution

SpruceID works with a variety of public sector agencies to issue verifiable digital credentials, creating a system of trust, security, and convenience that can be applied across numerous government applications. Our Credible platform supports issuing a range of digital credentials, from mobile driver’s licenses (mDLs) to professional certifications. These credentials use cryptographic digital signatures, ensuring that they cannot be falsified, shared through screenshots, or recreated by AI-generated deepfakes.

Our solutions prioritize minimal disclosure of personal data, enabling residents to verify credentials with only essential information (for example, only needing to show your age to enter a bar while keeping your personal address hidden). This keeps personal data secure and compliant with privacy regulations, all while eliminating government tracking or surveillance. In addition, Credible helps to drive increased efficiency, as digital processes streamline verifications and reduce administrative bottlenecks, ultimately saving time for both agencies and residents. By minimizing reliance on paper, agencies significantly lower overhead costs related to printing, mailing, and administrative handling, creating a cost-effective, privacy-centric solution.
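The minimal-disclosure idea described above (proving your age without revealing your address) can be illustrated with a small sketch. Everything below is a hypothetical simplification, not SpruceID's actual protocol: each claim is salted and hashed, the issuer signs the list of digests, and the holder reveals only the claims (plus salts) they choose. An HMAC with a shared demo key stands in for a real asymmetric signature such as Ed25519.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real signing key


def issue(claims: dict) -> dict:
    """Issuer: salt and hash each claim, then 'sign' the sorted digest list.
    HMAC simulates a real digital signature for this sketch."""
    salted = {k: (secrets.token_hex(8), str(v)) for k, v in claims.items()}
    digests = sorted(hashlib.sha256(f"{k}:{s}:{v}".encode()).hexdigest()
                     for k, (s, v) in salted.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"salted_claims": salted, "digests": digests, "signature": signature}


def present(credential: dict, reveal: list) -> dict:
    """Holder: disclose only the selected claims and their salts."""
    return {"disclosed": {k: credential["salted_claims"][k] for k in reveal},
            "digests": credential["digests"],
            "signature": credential["signature"]}


def verify(presentation: dict) -> bool:
    """Verifier: check the signature over the digest list, then check that
    every disclosed claim hashes to a signed digest. Hidden claims stay hidden."""
    sig_ok = hmac.compare_digest(
        presentation["signature"],
        hmac.new(ISSUER_KEY, json.dumps(presentation["digests"]).encode(),
                 hashlib.sha256).hexdigest())
    claims_ok = all(
        hashlib.sha256(f"{k}:{s}:{v}".encode()).hexdigest() in presentation["digests"]
        for k, (s, v) in presentation["disclosed"].items())
    return sig_ok and claims_ok


cred = issue({"name": "Alice", "over_21": True, "address": "123 Main St"})
bar_check = present(cred, reveal=["over_21"])
print(verify(bar_check))  # True: age proven, name and address never disclosed
```

Production schemes such as SD-JWT follow the same salted-hash pattern but use asymmetric signatures, so the verifier never holds the issuer's private key.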

Shaping the Future of Digital Identity in Government

We envision a future where government agencies fully leverage verifiable digital credential solutions that align with standards and advance alongside open-source industry collaborations. 

Through partnerships with agencies such as the California DMV, we demonstrate our commitment to creating scalable, interoperable solutions. By embracing VDCs, government agencies can protect citizens, enhance services, and reduce administrative burdens, empowering everyone securely and efficiently. To learn more about how SpruceID could help your government agency, visit our website and get in touch with us.

Contact Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


liminal (was OWI)

Solving for Trust by Design: The Identity Authorization Network Opportunity

The post Solving for Trust by Design: The Identity Authorization Network Opportunity appeared first on Liminal.co.

The Market for Identity Authorization Networks

The post The Market for Identity Authorization Networks appeared first on Liminal.co.

Sunday, 17. November 2024

KuppingerCole

Cyber Hygiene in the Age of AI


Matthias and Christopher discuss the critical importance of cyber hygiene in the corporate context, especially in light of evolving threats such as AI-driven attacks, deepfakes, and ransomware. They emphasize the need for organizations to train employees on recognizing and responding to these threats, as well as the role of technology in both perpetrating and preventing cybercrime. The discussion also touches on the growing issue of disinformation and the necessity for vigilance in verifying information.



Friday, 15. November 2024

Holochain

The Holochain Foundation is Coming of Age

Organizational Shifts to Support Delivery

In the open-source world, there is a well-known dilemma: it’s very difficult to find funding for deep infrastructure-level projects. We knew, when envisioning Holochain, what we wanted to bring into the world — the capacity for groups of people to create digital spaces in which to engage without the need for any intermediaries or web servers. We wanted it to work with nothing but the computers of the very groups of people wanting to engage.

Well, that counts as a deep open-source infrastructure-level project. So we came up with a strategy to create a company that needed Holochain’s new infrastructural capacity, and could market its business proposition and plan. But instead of being owned by venture capital, it would be owned by an open-source foundation on behalf of all the eventual users of that infrastructure.

That vision led us to launch Holo as a distributed cloud hosting company for Holochain apps, one that would also use Holochain itself to manage that cloud infrastructure and handle the value accounting between hosts and app providers. This was a complex and tall order. And so for the past several years, all of us in the Holo/Holochain world have been driving primarily to meet Holo's needs as we've been implementing Holochain itself. Admittedly, it's taken us significantly longer than we initially thought to build out the depth of the Holochain feature set, along with the complex infrastructure that Holo needed to offer generalized Holochain-based cloud hosting. It's been hard going. In the meantime, the world has also changed around us, revealing new demands for where and how Holochain actually wants to be used in today's market, demands that are different from what we initially envisioned.

To meet these changes, and to increase our capacity to deliver, we are updating our strategy, which means a significant organizational restructuring. You can read the announcement of this restructuring on the Holo.host press site.

But what does this mean more specifically for the Holochain Foundation? Foremost, the Foundation will move from being a passive holder of the Holochain intellectual property, on behalf of the community, into being the active operational entity supporting and managing the Holochain development team.

Part of our “coming of age” is realizing that we can’t do everything we might like. Focus matters. Our strategic plan for delivering on our mission of “fostering the evolution and thriving of the Holochain framework and related ecosystems”, begins with just one thing: The stability and reliability of Holochain, such that it can be deployed as industrial-strength, mission-critical infrastructure by commercial projects as well as our community stakeholders. What this means in practice is testing, testing, testing! This includes:

Continued build-out of our “Wind Tunnel” performance testing framework so that we can verify that Holochain’s operating envelope, across each one of its features, meets or exceeds the demands of the specific projects currently bringing Holochain applications to market.

Ensuring the sufficiency of testing code coverage of each of Holochain’s key features and undertaking any refactors necessary to bring them in line with our stakeholders’ needs.

Upgrading our release patterns so that stakeholders delivering mission-critical apps can conditionally enable the more experimental and not-fully-tested features, while those stakeholders who are on the bleeding edge can help develop and test those very features.
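The last point, conditionally enabling experimental features per deployment, can be sketched as a simple feature gate. The class and feature names below are hypothetical illustrations, not Holochain's actual release mechanism:

```python
# A minimal feature-gate sketch: stable features are always on,
# experimental ones must be explicitly opted into per deployment.
# Feature names here are invented for illustration.

STABLE = {"networking", "validation"}
EXPERIMENTAL = {"sharding", "warrants"}


class FeatureGate:
    def __init__(self, opt_in=()):
        unknown = set(opt_in) - EXPERIMENTAL
        if unknown:
            raise ValueError(f"unknown experimental features: {unknown}")
        self.enabled = STABLE | set(opt_in)

    def require(self, feature):
        """Raise unless the feature is enabled in this deployment."""
        if feature not in self.enabled:
            raise RuntimeError(f"feature '{feature}' is not enabled in this build")
        return True


gate = FeatureGate(opt_in=["sharding"])
gate.require("validation")  # stable: always available
gate.require("sharding")    # experimental: available because it was opted in
```

A gate like this lets mission-critical deployments stay on the tested surface while bleeding-edge stakeholders exercise the experimental paths.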

We believe that the Holochain Foundation’s coming-of-age shifts us into delivering on our mission by serving our stakeholders better. We are deeply committed to our partners and ecosystem stakeholders that are currently delivering or developing Holochain apps, both the newer ones like Volla and its Messages app for the new Quintus Volla Phone, data provenance solutions from Kwaxala, verified data for semi-fungible token solutions including the recently released Jade City vaults, decentralized game data for esports, the new Visible Verification project, and a project with Carbistry in the voluntary carbon market domain; and also those partners who’ve been with us for a long time, like Darksoil Studio, Humm Hive, Neighbourhoods, Coasys, Lightningrod Labs, Carbon Farm Network, Valueflows, and others; and of course, supporting the features needed by the reorganized Holo and HoloFuel organizations. We are also preparing for larger-scale adoption of Holochain solutions in 2025, including collaborations with major players in the industrial supply chain and media industries.

As part of the reorganization and living into our commitment to focus, I will be stepping in as Executive Director of the Holochain Foundation. This will allow our current Executive Director, Mary Camacho, to concentrate on Holo’s new direction, as well as her passion of enabling commercial projects both directly and via new structures within the Foundation.

There’s much more in store for the Foundation going forward, especially around expanded and more formal structures for stakeholder involvement and engagement in Holochain’s development. This will take us a while to roll out, but you can expect more details in the next months. If your work depends on Holochain and you have feedback for what you would like to see from the Foundation going forward, please email me and Paul at feedback2024@holochain.org [Editor: we have changed this address since publication; please try this new address if you couldn't get through before].

Thanks for being with us as we grow up and into our next phase.

– Eric Harris-Braun

PS: We’ve been working on an update to the Holochain White Paper this year, and it’s finally published! It makes the claim for a practical Byzantine fault tolerant system for everyday use, as distinct from systems that are robust but costly in practice. It’s accompanied by another paper, Players of Ludos, which tells the story of how Holochain works through the activities of a group of nomadic board game players.

Cover photo by Anton Sobotyak on Unsplash


KuppingerCole

Jan 21, 2025: Navigating the End of SAP IDM: Future-Proofing Identity Security and Compliance

The impending end-of-life for SAP Identity Management (IDM) presents a critical juncture for organizations relying on this solution. As support winds down by 2027, with extended maintenance until 2030, businesses face urgent challenges in maintaining robust identity and access management frameworks. This transition period offers a unique opportunity to modernize and unify identity security and governance strategies.

IDnow

Fraud in 2024: IDnow customers have their say.

We explore some of the challenges our customers have faced this year and how they plan to tackle fraud in 2025.

By the end of this year, more than 70 billion identity verification checks will have been made. In a world of just 8 billion people, that number appears absolutely staggering.

However, when you consider how frequently people have their identity verified in this ‘always on, always connected’ world, the number is perhaps not as high as it would originally seem. 

Nowadays, people have their identity verified and reverified without giving it much thought, undergoing data and document checks and age verification to use most digital services. In the not-too-distant past, if you wanted to open a bank account, rent a vehicle or use a particular service, you would invariably be required to visit a brick-and-mortar store, clutching at least two forms of paper identification. Even then, the process was unlikely to conclude on the same day, with prospective customers often needing to wait several more days until their identity could be verified and they could access said service.

Nowadays, thanks to a range of automated and in-person identity verification services, this can be done in a matter of minutes, affording unrivalled convenience that many would have thought impossible just a decade ago. Striking a balance between offering an identity verification process that is secure for the business but convenient for the customer is essential. Without it, the business runs the risk of fraud attacks, which can impact its reputation and bottom line and ultimately lead to customer abandonment. 

To discover the challenges that our customers have been facing in 2024 and what next year may look like, we conducted the inaugural IDnow Customer Survey 2024, featuring a number of clients across the UK, France and Germany. Respondents held a variety of positions from Product Managers to Head of Compliance.

Top 3 identity verification challenges in 2024:

1. Increasing operational efficiencies and cutting costs (53%).
2. Keeping conversion rates high (47%).
3. Managing the volume and wide range of different types of fraud attacks / keeping up with technological developments in fraud and identity verification (both 41%).

How our customers tackled fraud in 2024.

The costs of a business falling victim to fraud go way beyond financial. Yes, fraud impacts the bottom line, but it can also have a disastrous effect on company reputation and lead to customers losing trust in the brand.    

To safeguard against this and prevent fraud, six out of 10 of our customers said they had conducted training sessions to enable staff to better identify internal and external fraud risks, while 53% said they had invested in new fraud prevention technologies. Just over a third (35%) said they had deployed multi-layered identity verification procedures, including data, biometric and database checks, such as for PEPs and Sanctions.

UK Fraud Awareness Report 2024: Learn more about the British public’s awareness of fraud and their attitudes toward fraud-prevention technology.

Preparing for fraud challenges in 2025

When asked what the biggest fraud challenge for the year ahead was, an equal number of respondents (59%) cited reputational damage from fraud attacks and the financial cost of tackling and managing fraud. This was followed very closely by just over half (53%) who said they were concerned about how a lack of consumer awareness could lead to increased fraud risks.  

Regarding the types of fraud that customers were most concerned about, 24% of businesses seemed to be most worried about social engineering, such as phishing, while around the same number cited ID document forgery and manipulation. To a lesser extent, customers said that money mules and identity theft (both 18% each) were the primary fraud challenges in 2025. Interestingly, just 12% cited deepfake attacks (despite it becoming an increasingly commonplace method), while just 6% of respondents cited insider threats as the top fraud challenge for 2025. 

When asked how they planned to fight fraud in 2025, the majority of respondents ranked effective training and upskilling staff as the most important action to be taken, followed by access to AI technologies. Internal appointment of new people responsible for fraud fighting and risk mitigation was considered the least important action to take. 

Interestingly, while some businesses have already deployed multi-layered anti-fraud solutions this year, a large majority of businesses expect it to be very important (70%) and somewhat important (12%) going forward. Only 6% claimed that it was not important to them at all. 

At IDnow, we recognize the importance of keeping up to date with the latest developments and techniques in fraud and run regular training sessions and courses for our clients. 

To learn more about how our industry-leading fraud prevention technology can help you fight fraud to safeguard against fake IDs, synthetic identities, deepfakes, social engineering, money mules and more, check out our blog on the role of identity verification in the fight against fraud.

Or for more insights from industry insiders and thought leaders from the world of fraud and fraud prevention, check out one of our interviews from our Spotlight Interview series below.

Jinisha Bhatt, financial crime investigator
Paul Stratton, ex-police officer and financial crime trainer
Lloyd Emmerson, Director of Strategic Solutions at Cifas

Or, discover all about the rise of social media fraud, and how one man almost lost a million euros to a pig butchering scam in our blog, ‘The rise of social media fraud: How one man almost lost it all.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


KuppingerCole

Passwordless Authentication for Enterprises and Consumers: HID​


by Alejandro Leal

The password is a remnant of an era before hacking and credential-based attacks became a widespread problem. Although the internet has changed significantly since the early days, passwords have only become longer and more complicated. In parallel, cybercriminals have targeted operating systems with increasing sophistication and frequency as computers have become more accessible worldwide. For years, IT professionals have discussed the idea of eliminating passwords because they can easily be compromised. In addition, passwords can be costly, time-consuming, and difficult to manage, often resulting in poor user experience. Furthermore, the fact that password reuse is common practice among customers and employees only exacerbates the problem.

In the context of Customer Identity and Access Management (CIAM), passwordless authentication solutions should have features and capabilities to detect, prevent, and minimize fraudulent activities and unauthorized access within an organization. Effective fraud prevention measures are crucial for protecting both the financial and reputational assets of a business. Passwordless authentication solutions should also support a variety of consumer devices, including smartphones, tablets, laptops, and desktop computers, ensuring seamless access across different platforms and operating systems.

PingTalk

Verifiable Credentials in Decentralized Identity

Understanding API and automated credentials and how they relate to decentralized identity

It’s an exciting time in the world of digital identity. We’re witnessing the convergence of user identification, authentication, and authorization in the palm of our hand – through our biometrically secure mobile devices and digital wallets. As an identity provider taking part in this paradigm shift (commonly referred to as decentralized identity), understanding the types, configuration, and ecosystem of verifiable credentials is crucial. Let’s start with some definitions.

Thursday, 14. November 2024

KuppingerCole

Understanding the Impact of AI on Securing Privileged Identities


Understanding the impact of AI on securing privileged identities has become a critical concern in today's rapidly evolving cybersecurity landscape. As artificial intelligence continues to advance, it presents both opportunities and challenges for organizations striving to protect their most sensitive access points. The rise of AI-powered threats has significantly altered the traditional identity attack chain, requiring a fundamental shift in how we approach privileged identity security.

To combat these emerging threats, organizations must leverage cutting-edge technologies and adopt innovative strategies. By implementing AI-driven security solutions, companies can enhance their ability to detect and respond to sophisticated attacks targeting privileged identities. These advanced systems can analyze vast amounts of data in real-time, identifying anomalous behavior and potential security breaches before they escalate. Additionally, machine learning algorithms can continuously adapt and improve security measures, staying one step ahead of evolving AI-powered threats.

Martin Kuppinger, Principal Analyst at KuppingerCole, will provide expert insights into the changing landscape of privileged identity security in the age of AI. He will discuss the latest trends in AI-driven threats, their impact on the identity attack chain, and offer strategic recommendations for organizations to strengthen their security posture. Martin will also explore the potential of AI as a defensive tool and how it can be leveraged to enhance privileged access management.

Morey J. Haber, Chief Security Advisor at BeyondTrust will share practical experiences and best practices for safeguarding privileged identities against AI-powered threats. He will present three key tips that organizations can implement to protect themselves from emerging AI-driven attacks. Morey will also discuss real-world case studies demonstrating successful strategies for integrating AI into existing security frameworks to bolster privileged identity protection.




TBD on Dev.to

What is Web5?


Web 5 is a decentralized platform that provides a new identity layer for the web to enable decentralized apps and protocols.

In the current web model, users do not own their data or identity. They are given accounts by companies and their data is held captive in app silos. To create a new class of decentralized apps and protocols that put individuals at the center, we must empower them with self-owned identity and restore control over their data.

Components of Web 5

There are three main pillars of the decentralized web platform, all of which are based on open standards.

Decentralized Identifiers

The identifiers we know and use today are owned by the government, a company, an organization, or some other intermediary. For example, our email addresses and social media handles are identifiers associated with us but are owned and controlled by the service providers. These companies have the right to ban, disable, or delete these identifiers and we have little to no control over this.

So before we can realize truly decentralized applications, we need decentralized identifiers that users own and control. This removes the dependency on centralized entities to authenticate and represent us.

Decentralized Identifiers (DIDs) are a W3C standard. They have a standardized structure that essentially links to you and your information.

They are a long string of text that consists of three parts:

- the URI scheme identifier, which is did
- the identifier for a DID method
- the DID method-specific identifier

DIDs are the only component of Web5 that touch a blockchain, which is generally limited to anchoring the keys/endpoints linked to the ID.

That being said, anchoring DIDs on Bitcoin (or any blockchain) is not a requirement. In fact, what's great about having the standardized formatting for DIDs is that they can be anchored anywhere or not anchored at all and this still works, although with varying levels of decentralization.

Here are examples of DIDs on the Bitcoin blockchain, the Ethereum blockchain, and the web. Notice they all use the same format: scheme, DID method, and DID method-specific identifier.

did:btcr:xyv2-xzpq-q9wa-p7t
did:ens:some.eth
did:web:example.com
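Because every DID follows the same scheme:method:identifier layout, splitting one into its parts is mechanical. A minimal illustrative sketch in Python (not drawn from any particular SDK):

```python
def parse_did(did: str) -> dict:
    """Split a DID into its three standard parts:
    URI scheme, DID method, and method-specific identifier."""
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError(f"not a DID: {did}")
    return {"scheme": scheme, "method": method, "id": method_specific_id}

# The example DIDs above all share the same layout:
parse_did("did:btcr:xyv2-xzpq-q9wa-p7t")  # method "btcr"
parse_did("did:ens:some.eth")             # method "ens"
parse_did("did:web:example.com")          # method "web"
```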

Because personal data is not stored on the blockchain, the DID essentially acts as a URI that associates the subject of the DID (the person, company, or object being identified) with a DID document that lives off-chain.

DID Documents are JSON files stored in decentralized storage systems such as IPFS, and describe how to interact with the DID subject. The DID Document contains things like the DID subject's public keys, authentication and verification methods, and service endpoints that reference the locations of the subject’s data.

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ion:EiClkZMDxPKqC9c-umQfTkR8",
  "verificationMethod": [
    {
      "id": "did:ion:EiClkZMDxPKqC9c-umQfTkR8",
      "type": "Secp256k1VerificationKey2018",
      "controller": "did:ion:EiClkZMDxPKqC9c-umQfTkR8"
    }
  ],
  "authentication": ["did:ion:EiClkZMDxPKqC9c-umQfTkR8"]
}

Verifiable Credentials

Verifiable Credentials are a fully ratified W3C standard that work hand in hand with Decentralized Identifiers to enable trustless interactions - meaning two parties do not need to trust one another to engage, but claims made about a DID subject can be verified.

For example, Alice needs to prove she has a bank account at Acme Bank. Acme Bank issues a cryptographically signed Verifiable Credential which would be stored in Alice's identity wallet.

The credential contains the issuer as Acme and the subject as Alice, as well as the claims, which are Alice's account number and full name.

Upon request for proof of banking, Alice presents the Verifiable Credential that's cryptographically signed by both Alice as well as her bank.

This is an easy, machine-readable way to share credentials across the web. The Verifier does not know or trust Alice, but they do consider Acme trustworthy, and they have essentially vouched for Alice therefore distributing trust.
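The Alice and Acme example can be sketched as a credential object using the W3C VC Data Model's field names. The DIDs, dates, and claim values below are invented for illustration, and the proof block is only a placeholder for the issuer's real cryptographic signature:

```python
# Illustrative only: DIDs, dates, and claim values are invented.
alice_bank_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:acme-bank",       # Acme Bank signs as issuer
    "issuanceDate": "2024-11-14T00:00:00Z",
    "credentialSubject": {                    # claims made about Alice
        "id": "did:example:alice",
        "fullName": "Alice Example",
        "accountNumber": "12345678",
    },
    # A real credential carries a proof signed with a key published in
    # the issuer's DID document; this is just a placeholder.
    "proof": {"type": "Ed25519Signature2020"},
}

def claims_about(credential: dict, subject_did: str) -> dict:
    """Return the claims a credential makes about a given DID subject."""
    subject = credential["credentialSubject"]
    if subject["id"] != subject_did:
        return {}
    return {k: v for k, v in subject.items() if k != "id"}
```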

Decentralized Web Nodes

Today, centralized entities act as our data stores. Applications hold all of our content and preferences on their servers.

Decentralized Web Nodes (DWNs) change this by allowing us to decouple our data from the applications that we use, and instead host our data ourselves in our own personal data stores.

BlueSky is a good example; it's a decentralized social media app. With BlueSky, your tweets and your connections aren't stored with the application. They are stored with you. So you can present your content on any decentralized social media app you want, not just BlueSky.

Your DWNs can hold both public and encrypted data. For example, in the case of a decentralized social media app, you'd want data like your posts and your connections to be public but things like your DMs to be private.

Your decentralized web nodes do not live on the blockchain. You can host your web nodes anywhere (your phone, computer, etc) and can replicate them across your devices and clouds and all data will be synced.

While self-hosting your DWNs provides a means for decentralizing your data, we recognize some users will be more comfortable with others hosting their web nodes for convenience's sake. We envision there will be vendors offering to host your web nodes for you. The good part is that you can encrypt any private data, so unlike today, where cloud hosts can scan everything you host there, you can still maintain some privacy even if your web nodes are hosted by intermediaries.

Your DWNs are associated with your Decentralized Identifiers and are listed in a DID document.

Notice the serviceEndpoint section of the DID doc specifies service endpoints and provides URIs to the decentralized web nodes.

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:web:example.com:u:alice",
  "service": [
    {
      "id": "#dwn",
      "type": "DecentralizedWebNode",
      "serviceEndpoint": {
        "nodes": ["https://dwn.example.com", "00:11:22:33:FF:EE"]
      }
    }
  ],
  "verificationMethod": [
    {
      "id": "did:web:example.com:u:alice",
      "type": "Secp256k1VerificationKey2018",
      "controller": "did:web:example.com:u:alice"
    }
  ],
  "authentication": ["did:web:example.com:u:alice"]
}
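An application resolving where to send messages would read those serviceEndpoint entries out of the DID document. A small illustrative helper, assuming the document shape shown in the example:

```python
def dwn_endpoints(did_document: dict) -> list:
    """Collect DecentralizedWebNode endpoint URIs from the service
    entries of a DID document."""
    endpoints = []
    for service in did_document.get("service", []):
        if service.get("type") == "DecentralizedWebNode":
            endpoints.extend(service["serviceEndpoint"]["nodes"])
    return endpoints

# Trimmed version of the DID document above:
alice_did_doc = {
    "id": "did:web:example.com:u:alice",
    "service": [
        {
            "id": "#dwn",
            "type": "DecentralizedWebNode",
            "serviceEndpoint": {
                "nodes": ["https://dwn.example.com", "00:11:22:33:FF:EE"]
            },
        }
    ],
}
```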

Given an application has the address to your DWN, they can send you a request for data.

This represents a request from an application to obtain all objects within a DWN that follow the SocialMediaPosting schema:

POST https://dwn.example.com/
BODY {
  "requestId": "c5784162-84af-4aab-aff5-f1f8438dfc3d",
  "target": "did:example:123",
  "messages": [
    {
      "descriptor": {
        "method": "CollectionsQuery",
        "schema": "https://schema.org/SocialMediaPosting"
      }
    },
    {...}
  ]
}

The data within DWNs are JSON objects that follow a universal standard, thus making it possible for any application to discover and process the data given its semantic type.

If this data is public, those objects will be automatically returned to the application, and if the data is private, the node owner would need to grant the application access to that data.
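That access logic can be modeled in a few lines: filter stored objects by schema, and return private ones only when access has been granted. This is a simplified, hypothetical model, not the DWN specification's actual message handling:

```python
def collections_query(store: list, schema: str, requester_authorized: bool = False) -> list:
    """Return stored objects matching a schema: public objects always,
    private ones only when the requester has been granted access."""
    return [
        obj for obj in store
        if obj["schema"] == schema and (obj["public"] or requester_authorized)
    ]

# Invented sample contents of a DWN:
posts = [
    {"schema": "https://schema.org/SocialMediaPosting", "public": True,  "text": "hello"},
    {"schema": "https://schema.org/SocialMediaPosting", "public": False, "text": "a DM"},
    {"schema": "https://schema.org/MusicPlaylist",      "public": True,  "name": "mix"},
]
```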

Identity Wallets

Obviously all of this is pretty complicated, especially for non-technical users. So we need a simple, easy-to-use interface that will allow people to access and manage their identity.

A well designed identity wallet would provide ways to manage the data stored in decentralized web nodes, the decentralized IDs and the context in which they should be used, verifiable credentials, and authorizations.

Decentralized Web Apps

Web 5 enables developers to build decentralized web applications (DWAs) on top of it and it’s all open source! You're free to use it as your foundation and focus your attention on what you really care about, your app. Web5 brings to DWAs what cloud and application servers bring to enterprise apps. It does the hard part. It brings decentralization. By building your apps on top of Web 5, you get decentralization and identity and data management as part of the platform.

This is definitely a fundamental change in how we exchange data, but it's not a total overhaul of the web we already know. This works like Progressive Web Apps, but you'd add the decentralized web node SDK and then applications are free to really go serverless because the data isn't stored with them.

The sky's the limit to what you can build on top of this platform, but here are some cool basic examples.

Music Applications

No one likes recreating their music playlists over and over again for different apps. With Web 5, you wouldn't have to do that.

In this example, Groove has access to write to Alice's decentralized web node and adds a new entry.

Tidal has access to read from Alice's DWN, so can read the new entry that was added by Groove, and now Alice has her playlist readily available on both apps.

Because the data is used continuously across apps, Groove and Tidal not only get access to Alice's data but can use it to improve her user experience, creating a stronger experience than Alice could have had without this tech.

Travel Applications

Your travel preferences, tickets, and reservations are scattered across so many different hotels, airlines, rental car agencies and travel apps, making it really difficult to coordinate. Heaven forbid there's any hiccup in the system such as a delayed flight. You end up trying to get in touch with the car rental place to let them know you'll be late for your reservation, and if it's really late, you'd want to call the hotel to ask them not to give away your room. All while you're hustling and bustling at the airport.

Web 5 can help unify these various app experiences.

If Alice gives the hotel, the airline, and the rental car agency access to the Reservation and Trip objects in her DWN, they can react and adjust accordingly to any changes made.

These are just a few applications that can be realized by building on top of Web 5. There's so many more possibilities once the web is truly decentralized the way it was always intended to be.


California DMV Hackathon Win: Privacy-Preserving Age Verification


At the recent California DMV Hackathon, the Block team, represented by members from Square and TBD, won the Best Privacy & Security Design award for building a prototype of an instant age verification system. This solution utilizes mobile drivers’ licenses (mDLs) to provide secure, privacy-centric transactions for age-restricted purchases with Square’s Point of Sale (POS) system.

In this post, we’ll explore the core technical components behind our solution, which centered on using TruAge technology to enable seamless, secure age verification.

How TruAge QR Code Verification Works

At the heart of our prototype is the ability to scan and verify a TruAge Age Token QR code. These QR codes contain a verifiable credential (VC) that confirms a person’s legal age without exposing unnecessary personal information. Here’s a breakdown of how we approached verifying these credentials in our solution.

Decoding the QR Code Payload

The first step in the verification process was reading the QR code provided by the customer. TruAge QR codes follow a standard format which encodes the verifiable presentation (VP) in a compact CBOR format.

Our team implemented a scanner using our open source web5-swift SDK that reads the QR code and decodes the CBOR-encoded payload. This CBOR format is efficient, allowing the verifiable presentation to be transmitted and processed quickly, minimizing any delays at the point of sale.

Converting CBOR to JSON

Once we decoded the CBOR data, the next step was to parse it into a JSON-based verifiable presentation using the W3C Verifiable Credentials (VC) Data Model v1.1. This model is critical to ensuring interoperability across different platforms and services, as it standardizes how credentials are represented and exchanged in a decentralized manner.

Validating the Issuer’s DID

After converting the data into a verifiable format, we needed to validate the digital signature on the credential. We retrieved the issuer’s Decentralized Identifier (DID) from the TruAge server, which provided us access to a sandbox environment containing their list of authorized DIDs.

Using DIDs, we were able to validate the cryptographic signature to ensure that the credential was issued by a trusted TruAge provider. This validation step is critical for ensuring that the credential has not been tampered with and is issued by a legitimate authority.

Credential Content Verification

Once the issuer’s signature was validated, the next step was to check the contents of the verifiable credential itself. In this case, we looked for proof that the individual was over 21 and verified that the credential had not expired.

This lightweight verification process ensures that businesses can quickly and easily confirm a customer’s legal age, while protecting their privacy by not exposing sensitive information like birthdates or addresses.
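The content check reduces to two predicates: the over-21 claim and the expiry date. A minimal sketch, with invented field names rather than the actual TruAge credential schema:

```python
from datetime import datetime, timezone

def credential_is_acceptable(credential: dict, now=None) -> bool:
    """Accept only unexpired credentials that assert the over-21 claim.
    Field names here are illustrative, not the TruAge schema."""
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromisoformat(credential["expirationDate"])
    return credential["credentialSubject"].get("over21") is True and expires > now
```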

Building the Integration: Web5 and TruAge Libraries

To bring this solution to life, we used a few key technologies:

iOS: Our team developed the iOS implementation using the web5-swift library, which allowed us to efficiently handle the scanning, decoding, and parsing of the TruAge QR codes on Apple devices.

Android: For Android, we modified the TruAge library provided by Digital Bazaar to make it compatible with our solution. This involved adapting the library for seamless integration with our QR code parsing and validation logic.

Privacy and Security at the Forefront

Our approach ensures that personal information is protected at every stage of the transaction. By focusing solely on verifying the specific data point needed (in this case, whether someone is over 21), we avoid collecting or storing any unnecessary information. This is a win for both businesses and consumers, as it minimizes risk while maintaining a smooth user experience.

By integrating this technology into Square’s Retail POS system, we not only enhanced security but also brought innovative, privacy-preserving solutions to small businesses that need to comply with age verification laws. This prototype has the potential to extend to many other use cases, from secure employee onboarding to identity verification for suppliers and customers.


KuppingerCole

Privileged Access Management (PAM)


by Paul Fisher

PAM is crucial for securing privileged access to critical resources, reducing the risk of breaches and insider threats. The market has seen rapid growth with the rise of cloud adoption, digital transformation, and the proliferation of identities across various platforms. Both established vendors and newer entrants are vying for market share, with some focusing on comprehensive identity security platforms and others offering specialized point privileged access solutions.

Ocean Protocol

DF115 Completes and DF116 Launches

Predictoor DF115 rewards available. DF116 runs Nov 14 – Nov 21, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 115 (DF115) has completed.

DF116 is live today, Nov 14. It concludes on November 21st. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF115 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF116

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF115 Completes and DF116 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 13. November 2024

KuppingerCole

Cloud Backup for AI Enabled Cyber Resilience


Organizations and society have become dependent upon digital services which has increased the business impact of cyber threats and hence the need for cyber resilience. Organizations need to take steps beyond preventing cyber-threats from impacting their digital infrastructure – they must also be able to respond to and recover when incidents occur.  Data backup solutions are an essential element of every organization’s cyber resilience plan.

In the webinar, Mike Small, Senior Analyst at KuppingerCole Analysts, will look at the status and future of Data Backup, what organizations should consider when defining their own approach to cyber resilience, and what the vendor landscape looks like. He will discuss different requirements for Data Backup and how solutions in the market meet them.




auth0

Demystifying Multi-Tenancy in a B2B SaaS Application

Why a Multi-Tenant approach is fundamental to B2B SaaS, and how using Auth0 and the Auth0 Organizations feature can help implement it.

Indicio

Verifiable credentials mature with product launches, implementations

Biometric Update The post Verifiable credentials mature with product launches, implementations appeared first on Indicio.

KILT

KILT Community Update: End of Delegator Staking Rewards


The community has just voted to end KILT Delegator rewards now rather than later — marking a strategic move from incentives toward sustainable growth. The delegator rewards were originally planned to last two years after Golive, but the community extended them for another year. Now, the community has decided to end this additional phase ahead of schedule, signaling a shift in priorities: lower inflation over higher rewards.

What’s Changed?

Delegator Rewards Ended: Effective immediately, Delegator rewards are set to 0%, lowering inflation significantly. However, Delegators can continue staking and play a role in KILT’s governance.

Collator Rewards Remain: Collators keep the KILT network operational, and their rewards will continue to support network reliability.

Why the Change?

Delegator rewards were initially intended to incentivize participation and stake on the most reliable collators, but they were intended to eventually phase out. It was a critical step in KILT’s growth, and the role delegators have played has been invaluable in maintaining the reliability of the KILT network. Now, with a strong foundation in place, the community is ready to move forward by reducing inflation and focusing on KILT’s expanding utility.

With bonding curves, KILT is entering a new era of utility, making the delegator reward incentives no longer necessary.

What’s Next?

The KILT community’s decision showcases the power of decentralized governance in action, bringing about meaningful change that serves both current and future participants. Your input continues to be crucial as KILT rolls out new features and community-driven initiatives.

Gratitude to the KILT Community

A heartfelt thank you to all KILT Delegators, Collators, and community members for your unwavering support, valuable input, and dedication. Together, we’re crafting a stronger and more sustainable future for KILT. We invite you to join us in this exciting new phase as we continue to innovate, collaborate, and grow — powered by the community, for the community.

For more information on the proposal and to join the conversation, please visit: KILT Governance Proposal: https://kilt.polkassembly.network/referendum/45?tab=onChainInfo

About KILT Protocol

KILT is an identity blockchain for generating decentralized identifiers (DIDs) and verifiable credentials, enabling secure, practical identity solutions for enterprises and consumers. KILT brings the traditional process of trust in real-world credentials (passport, driver’s license) to the digital world while keeping data private and in possession of its owner.

KILT Community Update: End of Delegator Staking Rewards was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lockstep

Australians becoming familiar with verifiable credentials

The Reserve Bank of Australia (RBA) recently released the 2024 Payments System Board Annual Report. It shows that Australian consumers are rapidly becoming familiar with verifiable credentials in the form of smart phone digital wallets.

“The RBA continued to monitor the growth in mobile wallet card transactions… Payments using mobile wallets reached 39 per cent of card transactions in the June quarter 2024”.  Graph 2.1 from the report is reproduced above.

Whether it’s tap-to-pay at a merchant terminal or click-to-pay in a mobile app, most consumers are becoming comfortable with this digital experience. Thus, they are ready to click-to-present any important IDs in exactly the same way.

Consumers should be able to prove any important facts about themselves with the same security, speed and ease of use as they present their payment cards. As Lockstep argued in our submission on Australia’s 2030 cybersecurity strategy, the single most impactful thing that governments could do to make citizens safe online is simply give them the option of carrying their driver licences, Medicare cards, health IDs and social security numbers in standard digital wallets.

The post Australians becoming familiar with verifiable credentials appeared first on Lockstep.


KuppingerCole

Analyst's View: Identity and Access Governance

by Nitish Deshpande

Identity and Access Governance concerns the access mechanisms and their relationships across IT systems. It is instrumental in monitoring and mitigating access-related risks. These risks most commonly include information theft and identity fraud through unauthorized changes and/or subversion of IT systems to facilitate illegal actions. Over recent years, security incidents have originated from poorly managed identities and proved the need to address these issues across all industry verticals.

Tuesday, 12. November 2024

KuppingerCole

Building Application Resilience Amidst Regulatory Shifts

In today’s fast-changing regulatory landscape, businesses must not only meet compliance standards but also ensure their applications are resilient against cyber threats. As regulations tighten and the risk environment evolves, organizations face growing pressure to safeguard their applications while staying compliant. The need to balance security with legal requirements has never been more critical for IT professionals.

Modern technology plays a pivotal role in addressing these challenges. From AI-driven threat detection to advanced encryption techniques, innovative solutions can enhance both security and compliance. By leveraging these tools, businesses can create resilient applications that not only meet regulatory demands but also protect critical data from emerging threats.

Osman Celik, Research Analyst at KuppingerCole, will discuss the evolving regulatory compliance landscape, particularly focusing on the finance and public sectors. He will provide insights into recent developments in PCI-DSS, the EU AI Act, and other critical frameworks. Additionally, Osman will explore industry-specific best practices to help IT professionals navigate this complex environment.

Prakash Sinha, Senior Director & Technology Evangelist at Radware, will highlight actionable strategies to build resilience into your applications. He will discuss the practical implementation of advanced security measures, share case studies of successful organizations, and outline key steps to fortify applications against the growing landscape of cyber threats—all while maintaining compliance with regulatory standards.




Anonym

Can an Existing Digital Identity Wallet Leverage a Hardware Security Module to Meet New EU Standards?

Anonyome Labs will co-present a paper with Australia’s Queensland University of Technology (QUT) at the 8th Symposium on Distributed Ledger Technology in Brisbane, Australia from November 28–29, 2024.

The paper, by Dr Paul Ashley, Ellen Schofield and George Mulhearn from Anonyome Labs, and Dr Gowri Ramachandran from QUT, considers how new European standards for the EU Digital Identity Wallet mandate support for a hardware security module (HSM) which can perform important cryptographic operations for very strong security and privacy protection for a user.

The paper outlines how an existing digital identity wallet can be enhanced to leverage an HSM, examining both inbuilt and external implementations, and presents a compatible matrix by analyzing the existing credential standards and different HSM cryptographic capabilities.

Watch this short video about the new European digital identity wallet.

The paper concludes that supporting the EU Digital Identity Wallet’s Architecture and Reference Framework (ARF), which defines the common standards, technical specifications, guidelines, and best practices for the Digital Identity Framework, is feasible and practical for mobile digital identity wallet applications, though tradeoffs will occur in algorithmic compatibility, user experience, and performance.

We will publish the full paper after the symposium.

Anonyome Labs is a sponsor of the 8th Symposium on Distributed Ledger Technology. See the symposium program for more information.

Distributed ledger technology is an emerging technology that provides a way to store and manage information in a distributed fashion. It enables decentralized cryptocurrencies, smart contracts, eGovernance, supply chain management, eVoting, and more over a network of computer systems without centralized human intervention.

Its reliability and security advantages over other cryptographic schemes have expanded the application domains of blockchain to include financial services, real estate, stock exchanges, identity management, supply chains, and the Internet of Things.

The symposium is a forum for researchers, business leaders and policy makers in this area to carefully analyze current systems or propose new solutions creating a scientific background for a solid development of innovative distributed ledger technology applications.


Explore Anonyome Labs’ digital identity wallet and reusable credentials solutions.

You might also like:

Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity
6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts
Gartner Confirms Anonyome Labs’ Solutions Offer Competitive Edge

The post Can an Existing Digital Identity Wallet Leverage a Hardware Security Module to Meet New EU Standards? appeared first on Anonyome Labs.


Indicio

Introducing Indicio Proven Digital Farming — a data management solution that frees farmers to do what they do best, farm

A powerful, portable, privacy-preserving way to share and reuse authenticated data using Verifiable Credentials that saves farmers time, money, and tedium, while connecting stakeholders and unlocking value across the agriculture sector

SEATTLE, Nov. 6, 2024: With the launch of Indicio Proven® Digital Farming, authenticated data can now be shared instantly and reused endlessly across the agriculture value chain — suppliers, government agencies, financial services, and vendors —all while maintaining the farmer’s ownership of their data. 

Farming is data-intensive work, with multiple data sources and regulatory and market requirements. Each hour spent on data management takes farmers away from farming — with a measurable economic cost. To meet the challenges of data management in agriculture, Indicio developed a flexible and scalable ecosystem solution using Verifiable Credentials and decentralized identity.

With a Verifiable Credential, a farmer can hold and manage authoritative, certified farm data from their phone and share it seamlessly with other stakeholders in the agricultural value chain all while maintaining data privacy and protection.

It’s an easy-to-implement solution that ensures farmers fully own their data. It eliminates the need for this data to be stored by third parties in order to be authenticated. Thanks to cryptography, the data shared from a credential cannot be tampered with — and the credential origin is always known. 

This means that data can be reused over and over again with the absolute certainty that those who need to see it can verify it as authentic. It gives farmers the power to be their own data platforms, while radically simplifying their data management burden. 

Benefits

Farms and farmers hold and own their own data — not third parties.
Capture authenticated data once in tamper-proof records that can be shared from a phone.
Consent to share data is built into privacy-by-design tech.
Connect stakeholders across the agricultural value chain through seamless data sharing and authentication.
Simplify regulatory compliance.
Accelerate access to international markets.
Proven, award-winning success in New Zealand.

Award winning

Indicio’s Digital Farming solution was first developed for Trust Alliance New Zealand (TANZ) , a nonprofit farming consortium. 

“Being able to quickly share data about their goods or emissions to these key relying parties provided a huge benefit to the farmers, saving them time, creating better connections between them and their customers, and reducing the amount of effort they have to spend filling out the same forms multiple times,” said Sharon Lyon-Mabbet, Project Manager at TANZ.

TANZ’s implementation has won a prestigious Constellation Research SuperNova Award for Digital Safety, Governance, Privacy, and Cybersecurity. This is the second time an Indicio customer has won a Constellation Award.

Learn more about the project here.

A simple solution to an annoying and costly problem

“Verifiable Credentials are the perfect data management tool for a sector that relies on connecting multiple data sources with multiple parties for multiple purposes,” said Heather Dahl, CEO of Indicio. “Farmers don’t want to spend hours and hours on data management, sending the same information to multiple agencies, suppliers, and vendors. And now they don’t have to. With Indicio Proven Digital Farming, we have a capture once, reuse often technology that gives farmers full control and ownership over their data. It’s a way to turn data from being an obstacle to being an opportunity to unlock value, because now it’s easy to share authenticated data in a frictionless way with those who need to use it.”

The farmer as their own digital platform

Decentralized identity and Verifiable Credentials allow farmers to hold and share all kinds of tamper-proof data that can be instantly authenticated by relying parties:

Farm borders
Farm ownership
Methane emissions
Fertilizer application (soil nitrogen levels)
Pesticide & herbicide usage
Nutrient run-off
Water management
Implementation of food safety practices
Records for contaminant testing
Traceability information

Verifying software is simple to use and can be downloaded to a mobile device for instant in-the-field authentication.

What you get with Indicio Proven Digital Farming

We provide a complete solution that contains everything needed to get an entire data sharing ecosystem up and running fast, including digital wallet, mobile SDK, issuing, holding, and verifying software, hosting, support, constant updates, and even certified training. We can handle any customization for specialized use cases, and all our technology is built to meet current and emerging global decentralized identity standards, so you can be confident that your solution will work anywhere.

Indicio is the market-leader in decentralized identity and Verifiable Credential technology and has developed “government grade” digital identity and data sharing solutions for airlines, borders, banking and finance, health, and supply chains. 

Learn more about the solution at https://indicio.tech/digital-farming/, or contact our team to discuss ideas you have for a specific project.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Introducing Indicio Proven Digital Farming — a data management solution that frees farmers to do what they do best, farm appeared first on Indicio.


Spruce Systems

Why We Build Digital Infrastructure in Rust

Memory-safe programming offers a safer, more secure future.

If you’re alive in 2024, you’re probably used to hearing a lot about cybercrime. Large hacks, such as thefts of customers’ personal information, seem nearly constant – and they’re only projected to accelerate in coming years.

Recent advances in software development tools, however, offer hope. In February, the White House Office of the National Cyber Director issued a memo encouraging the wider adoption of what are known as “memory-safe” programming languages. That shift could mitigate up to 70% of hacks, preventing attacks that are currently causing a shockingly large amount of economic damage.

SpruceID has been an early adopter of memory-safe programming since its founding in 2020 as part of our commitment to high standards of security. Just about all of our tools are built using the memory-safe language Rust. Read on to find out more about memory-safe programming, Rust – and why all software builders should be taking similar steps into a more secure future.

Death By a Thousand Memory Leaks

Hacks based on flawed memory management are a large part of the massive economic and social harm caused by hacking – what the Council on Foreign Relations has described as a “death by a thousand cuts.”  USAID estimated $8 trillion in economic damage from cybercrime worldwide in 2023. One analysis estimated that cyberattacks will cost the U.S. economy alone more than $350 billion this year. That’s more than 20 times what the U.S. federal government spends on feeding school kids.

Poor memory management is a common weakness in older but still widely-used programming languages like C and C++, and according to research by Google, memory safety issues are the root cause of roughly 70% of all system-level hacks. Very broadly, a program can be exploited when it loses track of a chunk of the short-term memory (RAM) that programs run on. Attackers can use uncontrolled or badly indexed memory to alter the intended behavior of a program. The Spectre and Meltdown vulnerabilities, which exploit flaws in how processors handle memory to leak sensitive data, are still a threat years after their discovery.
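To make the contrast concrete, here is a minimal sketch in Rust, a memory-safe language, showing how an out-of-bounds read (a classic source of the exploits described above) is caught rather than silently reading adjacent memory. The buffer and indices are purely illustrative:

```rust
fn main() {
    let buf = vec![1u8, 2, 3];

    // In C, reading index 10 of a 3-byte buffer could silently return
    // whatever bytes happen to sit past the allocation. In safe Rust,
    // `get` returns an Option instead, and direct indexing would panic
    // in a controlled way rather than corrupt memory.
    assert_eq!(buf.get(1), Some(&2)); // in bounds: a borrowed value
    assert_eq!(buf.get(10), None);    // out of bounds: no read occurs
    println!("out-of-bounds access safely rejected");
}
```

Safe Rust simply provides no way to express the unchecked read that an attacker would exploit.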

Wider use of memory-safe programming languages is a system-wide way to address the ceaseless torrent of hacks. The White House notice (summarized here by Security Intelligence) follows a 2022 bulletin by the National Security Agency also encouraging the move towards memory-safe programming languages. It’s unusual for agencies like the NSA to issue specific software development advice, making this guidance particularly notable. 

The unusual push is justified because memory-safe programming presents the possibility of what might sound like a fantasy: dramatically reducing the prevalence of destructive hacks by attacking one of their root causes.

The Language of Choice for Secure and Reliable Solutions

SpruceID is committed to staying at the forefront of security standards. Our tools handle highly sensitive data and are trusted to verify its validity, often in high-security settings. With security as a top priority, we carefully design, develop, and deploy our solutions to meet these demands. That's why we build our secure applications in Rust, a programming language known for its memory safety and robustness. Its adoption by leading organizations highlights its suitability for building resilient, high-security systems, and we are glad to be part of this movement.

Rust is becoming increasingly recognized for its excellent design and is by far the most widely used memory-safe programming language. It has been integrated into critical components of Google, Linux, Windows, and Nvidia products. The February White House report can’t be seen as picking favorites, so it’s not explicit, but reading between the lines, it’s fairly clear that Rust is meant to be front and center for those mulling a path toward improved memory safety.

One of the more remarkable advantages of Rust, as Google reports, is that building new components with Rust provides security advantages even without re-writing or heavily modifying legacy codebases. That makes the transition far more efficient: Google began pushing Android development to memory-safe languages in 2019, and memory vulnerabilities have declined from more than 70% to just 24% of Android vulnerabilities in the years since - without overhauling existing code.

In November of last year, Microsoft announced that it was investing $10 million in improving developer tooling for Rust and integrating Rust into Windows and Azure environments. Microsoft also made a large contribution to the Rust Foundation, where SpruceID is also a member, and Microsoft engineers have said that Rust is mature enough to integrate into core components such as the OS kernel. Linux, the operating system that runs many industrial server systems, is also actively integrating Rust into its core architecture, shifting away from what devs consider “inherent weaknesses” in older languages.

While security is the headline, Rust does bring many other benefits. It leads to better performance in many circumstances, even in comparison to other modern languages like Go. Programmers also broadly consider it a pleasure to use: Rust is far and away the most “loved” programming language, according to a survey by Stack Overflow. Programmer Gregory Szorc has explained the appeal by describing Rust as a perfect mix of innovative ideas and user-friendliness. So an added benefit of Rust, and one we’ve definitely experienced at SpruceID, is that it makes it easier to attract and keep top coding talent.

One Important Piece of the Security Puzzle

While memory-safe programming languages like Rust are essential in reducing vulnerabilities, they’re only one component of a robust security program. At SpruceID, we recognize that creating secure systems goes beyond selecting a single language - it’s about designing, testing, and maintaining a multi-layered strategy for every stage of development and deployment.

Rust helps us uphold these high standards, but it’s integrated into a wider approach that includes rigorous protocols, continuous monitoring, and regular updates. Each of these components reinforces the security, reliability, and privacy that our users expect.

Rust is The Future

At SpruceID, we’re focused on building better identity systems, which are poised to become a more secure and more private way of managing our digital lives. Building on a secure foundation, and aiding the broader transition to memory-safe programming, is a natural extension of SpruceID’s core mission.

This isn’t just about strong principles and good vibes, though - these recent government directives on memory safety are a strong signal that it’s the right strategic move, too. The White House sets guidelines for Federal contractors and procurement, so memory safety could become a requirement for those applications. Builders interested in working with the government should all be considering transitioning to memory-safe tools, and Rust is clearly at the top of that list.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Indicio

SITA, Idemia partner to build digital identity ecosystem for travel

PhocusWire

The post SITA, Idemia partner to build digital identity ecosystem for travel appeared first on Indicio.

liminal (was OWI)

Market & Buyer’s Guide for Age Assurance 2024

The post Market & Buyer’s Guide for Age Assurance 2024 appeared first on Liminal.co.

Elliptic

Crypto regulatory affairs: Following the US elections, the industry anticipates regulatory clarity and move to pro-crypto stance

A sweep of the Presidency, Senate, and House of Representatives by the Republican Party in the November 5 US elections has the US crypto industry confident that regulatory clarity is on the way, and that a period of aggressive regulatory enforcement will be ending.


Datarella

Supply Chain Tracking in Action

This article is the fifth in a series of posts about how our probabilistic 360° supply chain tracking product, Track & Trust, works. We described how the system works at a component level in our previous articles. Now, we dive into the challenging environment where our pilot operations have been executed. We selected Lebanon, one of the most difficult operational locations in the world, for our first pilot shipments to really prove the mettle of the system.

Aid Pioneers – an Ideal Pilot Partner

We have been working with our humanitarian partner Aid Pioneers for many months to prepare for these shipments. Through close collaboration with on-the-ground initiatives and the private sector, Aid Pioneers connects resources from donors directly with local recipient organizations to foster sustainable, community-led change. They do this in the places that need it most, making them a highly innovative humanitarian agency. They take an end-to-end approach to the supply chain, which we believe suits Track & Trust perfectly. Aid Pioneers needs to extend tracking of supplies beyond what typical supply chain tracking products can accomplish, and we are helping them achieve this.

Supply Chain Tracking Challenges

Aid Pioneers’ logistics environment provides a perfect showcase for what Track & Trust can do. When Aid Pioneers ships a container full of medical supplies or solar power generation equipment to a Lebanese clinic or school, they hire a freight forwarder to pick up the goods. The freight forwarder organizes delivery to a local port via semi-truck and then loads the container onto a ship. The ship travels to a port of entry in Lebanon, and we track its progress using a typical tracking link. Once the container clears customs, however, we take over, actively tracking it and picking up where traditional systems stop working.

At this point we encounter tricky conditions. Aid Pioneers’ local Lebanese partner Al-Manhaj breaks containers down into multiple pallets before final delivery. Some goods are then delivered to one location while others go to other locations at different times. To keep track of what was delivered when, we use probabilistic 360° supply chain tracking. We also developed strategies to deal with power and connectivity outages.

Outwitting Outages

These outages always happen at the wrong time, so it’s important that the system is able to handle them. We do this with built-in backup batteries and a battery management system. On top of that, the communications landscape is very challenging: sometimes there’s 4G connectivity, and at other times there are outages. Our mesh nodes can operate regardless by caching incoming data locally; the nodes simply wait until the data can be posted or handed off to other mesh nodes. This approach multiplies the effectiveness of our communications assets. In addition, we positioned one of our satellite uplinks at a local school. As a result, every event is (at a minimum) recorded and transmitted asynchronously — even when conditions are at their worst.
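The store-and-forward behavior described above can be sketched roughly as follows. This is an illustrative model in Rust, not Datarella’s actual Track & Trust code; the type and method names are invented for the example:

```rust
use std::collections::VecDeque;

// Minimal store-and-forward sketch: events are always cached locally
// first, then flushed whenever connectivity returns.
struct MeshNode {
    cache: VecDeque<String>,
    online: bool,
}

impl MeshNode {
    // Persist an incoming event locally, regardless of connectivity.
    fn record(&mut self, event: String) {
        self.cache.push_back(event);
    }

    // Flush cached events when a link is available; returns how many
    // were sent. In reality this would hand data off to an uplink or
    // a neighboring mesh node.
    fn flush(&mut self) -> usize {
        if !self.online {
            return 0; // outage still in progress: keep waiting
        }
        let sent = self.cache.len();
        self.cache.clear();
        sent
    }
}

fn main() {
    let mut node = MeshNode { cache: VecDeque::new(), online: false };
    node.record("pallet 7 scanned".into());
    node.record("pallet 8 scanned".into());
    assert_eq!(node.flush(), 0); // offline: events stay cached
    node.online = true;
    assert_eq!(node.flush(), 2); // back online: backlog transmitted
    println!("backlog drained");
}
```

The key design point is that recording and transmitting are decoupled, so no event is lost to an outage.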

These logistics challenges are not unique to Aid Pioneers’ operations. However, they are particularly pronounced in the places where they work. We believe that if our system works there and brings value to freight forwarders and humanitarian organizations, it will work anywhere. As a result of this testing we’re confident in the capabilities of Track & Trust.

In our next post we’ll describe exactly how our pilot operations went — and what the big value drivers are.

<<Previous Post

Next Post>>

The post Supply Chain Tracking in Action appeared first on DATARELLA.


KuppingerCole

cidaas Auth Manager

by Alejandro Leal

In today's digital landscape, it is critical for every organization to have an agile and modern Identity and Access Management (IAM) solution. By providing complete visibility into who accesses what, when and how, modern IAM platforms enable organizations to better manage and mitigate risk. cidaas offers an IAM platform that is based on a microservices architecture with a core set of services designed to address both customer and employee requirements. This architecture facilitates rapid updates and scalability, while ensuring the integration of user management and authentication processes.

Dock

The DOCK token migration to CHEQ is now live!

As you already know, Dock and cheqd are merging their tokens and blockchains to form a powerful alliance in the decentralized identity space.

This partnership unites the blockchain capabilities of two industry leaders to accelerate the global adoption of decentralized identity and verifiable credentials, providing individuals and organizations worldwide with secure, trusted digital identities.

As part of this evolution, the Dock network will migrate its functionality and all tokens to the cheqd blockchain. This transition will enable Dock to leverage cheqd’s advanced infrastructure, delivering even greater value to both ecosystems.

During the migration, existing $DOCK tokens will be swapped for $CHEQ tokens at a conversion rate of 18.5178 $DOCK to 1 $CHEQ, ensuring a seamless and straightforward transition for all token holders.

How to migrate your DOCK tokens to CHEQ

Before you start

Ensure you have a compatible wallet for $CHEQ. If you don’t already have one, follow these instructions to set one up. We recommend using the Leap wallet, which has a browser extension and a mobile wallet on both Android and iOS, and can be easily connected during the migration process. An alternative is the Keplr wallet.

Update your wallet software to the latest versions. This ensures compatibility with the new system and reduces the likelihood of encountering bugs or issues during migration.

Note that the migration must be done through the Dock browser-based wallet. If you use the Dock Wallet App or Nova Wallet, you can easily add your account to the Dock browser wallet by following these steps.

If your $DOCK tokens are currently on an exchange, you’ll need to withdraw them to a Dock wallet to complete the token migration, as it can only be done from an address you own. Follow our guide on creating a Dock wallet account to get started. We are actively working with exchanges to allow them to handle the migration on behalf of users, and we’ll publish a list of participating exchanges as soon as it’s available. UPDATE 23/11/2024: KuCoin will support the migration of $DOCK tokens to $CHEQ. If you’re holding your $DOCK tokens on KuCoin, you don’t need to do a thing—KuCoin will handle the migration for you seamlessly. Read all the details here.

Read and understand the migration’s terms and conditions before proceeding. You can review them here.

The migration service will only be available until March 15, 2025. After this date, the migration will no longer be supported. Please ensure you complete the process before the deadline.

Migrating your DOCK tokens

1. Access the migration page. Click here to visit the migration page and begin the process.
2. Connect Leap or manually enter your cheqd account. Connecting the Leap wallet ensures that the tokens are sent to the cheqd account you control. If you do not see your Leap accounts in the dropdown, follow these steps to set up your Leap wallet for cheqd.
3. Select your Dock account. If your account isn’t already added, follow these instructions to add it.
4. Accept the Terms & Conditions. Once you’ve reviewed the T&Cs, click Submit to confirm your migration request.

The full balance of your Dock account will be migrated in a single transaction; partial amounts are not permitted. Once submitted, your $DOCK tokens will be burnt, and the converted $CHEQ tokens will be sent to your designated cheqd wallet using the swap ratio of 18.5178 $DOCK to 1 $CHEQ.
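For token holders estimating what they will receive, the published ratio works out as a simple division. The following snippet (a hypothetical helper in Rust, not part of any official Dock or cheqd tooling) illustrates the arithmetic; real balances are settled on-chain in integer denominations rather than floating point:

```rust
// Published swap ratio: 18.5178 $DOCK per 1 $CHEQ.
const DOCK_PER_CHEQ: f64 = 18.5178;

// Illustrative conversion helper, not official migration tooling.
fn dock_to_cheq(dock: f64) -> f64 {
    dock / DOCK_PER_CHEQ
}

fn main() {
    let balance = 10_000.0; // a hypothetical $DOCK balance
    println!("{balance} $DOCK -> {:.4} $CHEQ", dock_to_cheq(balance));
}
```

A holder of 10,000 $DOCK would thus receive roughly 540 $CHEQ.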

The migration process typically takes 1-2 business days, after which your $CHEQ tokens will be available in your cheqd wallet. Please bear in mind that during the holiday season (mid-Dec 2024 to early Jan 2025) it might take a bit longer.

Please follow these steps carefully, and if you have any questions, feel free to reach out to our team at support@dock.io.

Note: If you need any transaction reports from the Dock blockchain for tax purposes, make sure to download them from our Subscan blockchain explorer before March 15, 2025.

The Future of Decentralized ID

Dock and cheqd will continue as independent companies serving distinct market sectors in unique ways. By merging their tokens, expertise, and strategic focus, they will drive their shared vision forward with unstoppable momentum.

This token merger is not just a change; it's a monumental leap forward. By merging the $DOCK token with $CHEQ, we are unlocking unprecedented opportunities for our community, positioning you at the cutting edge of decentralized identity innovation.

The future of decentralized digital identity is bright, and with your $CHEQ tokens, you'll be part of a dynamic, growing ecosystem that is set to lead the industry. 

Dock and cheqd will shape a world where secure, verifiable credentials are the norm, and your involvement is key to making this vision a reality. The journey ahead is filled with potential, and we are thrilled to have you with us as we pave the way for the next era of digital identity.


auth0

Authentication and Authorization Enhancements in .NET 9.0

With .NET 9.0, some interesting authentication and authorization features have been added to the platform. Let’s take an overview of them.

Monday, 11. November 2024

1Kosmos BlockID

Vlog: How Can Remote Caller Verification Protect Your Organization From Social Engineering?

Mike Engle: Hi, everybody. My name is Mike Engle, co-founder and head of strategy here at 1Kosmos. I’m joined today by Jens Hinrichsen. Say hello, Jens. Jens Hinrichsen: Hello, everybody. Mike Engle: Jens is our head of sales here at 1Kosmos, spends a lot of time in the trenches. And today we’re here to talk … Continued The post Vlog: How Can Remote Caller Verification Protect Your Organization

Mike Engle:
Hi, everybody. My name is Mike Engle, co-founder and head of strategy here at 1Kosmos. I’m joined today by Jens Hinrichsen. Say hello, Jens.

Jens Hinrichsen:
Hello, everybody.

Mike Engle:
Jens is our head of sales here at 1Kosmos, spends a lot of time in the trenches. And today we’re here to talk about remote caller verification. We have an acronym for that, RCV. But Jens, would you mind giving your quick pitch on what RCV is for the folks out there?

Jens Hinrichsen:
Yeah, I would love to. And I think certainly also, Mike, with all the conversations that we’re both fortunate to have with a variety of organizations globally, please chime in with some of your own perspective as well. But I think remote caller verification, whether it is IT service desk for employees, contractors, other third parties that are interacting with an organization and have access to the inner sanctum, if you will, of an organization versus, say, contact center or call center. Where for years the industry has been working on solutions to mitigate fraud from a customer or outside facing standpoint, this is really about these emerging threat actor groups. Not even so much emerging, but Scattered Spider certainly has taken the cake recently in terms of being in the press most from MGM, Caesars, a host of other organizations where they have as a group socially engineered their way through the IT service desk of an organization.

So in the case of 1Kosmos, hi, I’m Mike Engle, I’m a co-founder. Service desk agent’s like, “Oh, my gosh, I got a co-founder on the call.” And if it’s not Mike and it’s a threat actor group, very charming, you name it, they can socially engineer their way in, get the credential reset, and then have Mike’s access to the company. So it is a big area of threat. It’s a big area of inefficiency also that organizations are trying to get better shored up. Mike, any other thoughts you have on that?

Mike Engle:
Yeah, so a lot of friends in the industry, I talk to them about this and they don’t have the right tools typically. So they’re using old, tired methods or no methods. They just turn it off because they can’t trust it. And an example would be secrets. What’s your employee ID? What was your date of hire? What was the amount of your last payroll deposits? Which I wouldn’t know that. So sometimes those are too hard and don’t work or they’re too easy to guess and anybody can use them. So social engineering has been around forever, but they’ve gotten really good at finding the information, the legacy ways that people have been using over time. What are some of the ways that they’re using now to get into help desks?

Jens Hinrichsen:
Well, it’s interesting, too. I think back to the point we made earlier from a fraud standpoint, I mean, there’s been social engineering going on for ages. Whatever that chain looks like, phishing, malware, getting information, and then pretending to be a customer of an organization, malicious actors are looking for economic gain and other impact for a variety of reasons. But where you can have big impact is when you’re able to infiltrate an organization. It’s one thing to steal $50,000 from a customer of an organization. It’s a big deal. You want to mitigate that, but as far as being able to get into the inner bowels of an organization’s IT stack moving laterally, whatever the case is, that is a huge area of focus these days.

So a lot of the, call it, the social engineering talent, the charms, I mean, Mike, you and I have even through different circles heard some of these calls and they’re … Wow, if I’m the service desk agent, yeah, I’m believing this person. You don’t have an ID for what reason or you don’t know this for whatever reason? Sure, of course. So I think it’s really been the same playbook focused on this avenue now. And again, it is really, really easy for these sophisticated threat actors to sound very believable, have core information that’s needed that would get a service desk agent to say, “Mr. Engle, co-founder of 1Kosmos, that’s fine that you don’t have this and this, but I’m going to issue a new credential to you right away. I want to make sure you’re happy.”

Mike Engle:
Right, and they may create a sense of urgency. I’m a doctor, I got a patient here at a table and I can’t unlock my stethoscope, whatever it is. So yeah, that’s a common tactic as well that we’ve seen them use. And then once they get that initial credential, they’re typically 50% of the way of getting into the core network and things go downhill from there. And so yeah, the traditional KBA, which you would think stands for knowledge based authentication.

Jens Hinrichsen:
Knowledge based authentication. Right.

Mike Engle:
We actually refer to it as known by anybody, KBA. So it really is close to useless. And whenever I open a new financial services account and they pop up those five questions, what was the type of car you had when you were five years old or whatever, I run for the hills if I can. So what can we do about it? How does 1Kosmos, for example, mitigate this threat?

Jens Hinrichsen:
Yeah. And even, Mike, before we go there, and I think one of the examples, what’s one of the KBA examples you’ve used before? It’s like your grandmother’s shoe size when she was nine or something. Well, whatever the iteration is, before we even get into solution, I think some of the really interesting parts that we’ve gotten more intimate with is even the other ways that organizations are trying to address this. So KBA, sure, that’s one. Known by anybody, as you said. OTP. Hey, I’m going to push you an OTP. Well, we still don’t know it’s Mike. And then we’re also seeing a lot of organizations, not even necessarily just at the highest level of privilege, but even more broadly where it’s an escalation to the manager. And you do the math on that in terms of just sheer productivity loss and in some cases you might not still be actually verifying it’s that genuine user.

So there’s these kind of clunky ways and tools that we as an industry have been trying to address this. And so to your question, Mike, it’s like, well, gosh, what is a way that an organization can do this where it’s effectively automated? So somebody is still calling into the service desk, but you’re removing the onus of verification from the service desk agent because the reality is service desk agents are being asked to do so many things already and they’re always do it in this amount of time, get it faster, faster. So you don’t want to forsake quality, but how do you have a very easy process for both agent and user, whether genuine or a malicious actor, to undertake that then gives the credence that, yes, this is actually Mr. Engle calling in? And so there are a few ways to do it. One that really gives, I’ll say, the minimum viable baseline would be a one-time identity verification or identity proofing event where I call into the service desk and I’m pretending to be you.

And the service desk agent says, “Okay, Mr. Engle, I’m going to send you a link either to your phone, to your email address.” There are a variety of things that you have to take into consideration obviously in terms of companies that might not have employees be able to have phones or are they company owned, et cetera. Those are all things that you see and we navigate accordingly, but the very simple process of opening up a link, scanning the front and back of a driver’s license, a passport, some other government issued document, and then doing a matching selfie against the image that’s on that document. And what we can do with very high assurance is give a thumbs up or thumbs down. And all we would do is simply say the agent, “Yep, this is Mr. Engle,” or in my case, pretending to be you, “No, this is not.” And so that’s a really simple initial way to do it. The really exciting part, and this is what permeates the next generation, which is actually here now and gaining steam, is the user control.

That reusable identity of, hey, once I have verified myself, once I essentially have an identity wallet that I can then present wherever it’s needed that proves that I am Mike Engle and I don’t have to go back through the whole process of scanning something, selfie, et cetera. So the elegance is there. You get high assurance, quick and easy, reduces call center times. And then again, you’re removing that, again, onus on the service desk agent of having to be the one. And there are other companies, too, Mike, where it’s, “Hey, can you hold your ID up to the camera?” It’s hard enough to tell that they’re real when you’re holding them, much less over a camera.
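The flow Jens describes (scan a government ID, match a selfie against its photo, hand the agent a simple yes/no) can be sketched as follows. This is an illustrative model, not the 1Kosmos API; the threshold and field names are assumptions:

```python
# Illustrative sketch (not the 1Kosmos API) of the verification decision
# described above: a document check plus a selfie-to-document face match,
# collapsed to a thumbs-up/thumbs-down for the service desk agent.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_valid: bool     # e.g. document security-feature checks passed
    face_match_score: float  # 0.0-1.0 similarity of selfie vs. document photo

MATCH_THRESHOLD = 0.90  # assumed policy threshold; tuned per deployment

def agent_verdict(result: VerificationResult) -> str:
    """Return the simple signal the agent sees: 'verified' or 'not verified'."""
    if result.document_valid and result.face_match_score >= MATCH_THRESHOLD:
        return "verified"
    return "not verified"

print(agent_verdict(VerificationResult(True, 0.97)))   # verified
print(agent_verdict(VerificationResult(True, 0.55)))   # not verified
```

The point of the design is that the agent never sees the biometric data itself, only the verdict.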

Mike Engle:
Yeah. And when I hold my license up to a camera, now what’s the other person doing with that information? First of all, they can’t verify it. It’s too hard. You can’t see the little security features and then now I’ve just showed you my driver’s license number. That’s something you don’t want floating out there on a video call. So yeah, the privacy preserving aspects are really key. If you can assure the help desk and your remote callers, your remote employees, or customers that it’s safe, then they’ll trust it and feel good about using it as well. That’s a great point. Yeah, so I think we’ve about done it. I guess one last thing is how hard is it to implement a tool like identity-based biometric verification for a service desk?

Jens Hinrichsen:
Yeah. What’s the usual answer? Well, we could have had it in yesterday, so you got a couple of flavors. And I think the great thing for us as an industry is you can literally start as fast as you can start with, call it, a touchless integration where you’re simply calling out to an API. That link that we talked about earlier that gets sent to the user, that’s essentially a service. It’s a hosted service and you’re not having to replumb or do anything on day one within your organization. You can address the threat, make it a simpler process literally within a couple of weeks. And then the subsequent steps that I know we’ve observed with our customers is there are things that you can do to tighten some of the workflows, whether it’s ServiceNow or whatever the service desk system or backend might be.

But then that next step, and it can come pretty quickly, is the organization’s adoption and use of that reusable identity. And it’s a pretty powerful thing when we think about especially at the point of, say, onboarding. Whether it’s say HR onboarding, contract, or third-party onboarding, you’re doing that verification once. The user now owns it. You made a great point about privacy preservation. I mean, that’s what we’re all in the space for, right? It’s one thing to have a point in time, but you have to make sure it’s privacy preserving. But then also, let’s make it efficient for everybody. Do the verification once and then all you’re doing is you’re essentially authenticating into systems or doing high-risk transactions or whatever the case is after that.
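The "touchless integration where you're simply calling out to an API" pattern might look like the sketch below on the help-desk side. The endpoint and payload fields are invented for illustration and are not a documented 1Kosmos interface:

```python
# Hypothetical sketch of the "touchless" pattern described above: the
# help-desk backend calls out to a hosted identity-verification service
# that texts or emails the caller a one-time verification link.
# The URL and all field names here are invented placeholders.
import json

VERIFY_API = "https://idv.example.com/v1/verification-links"  # placeholder

def build_link_request(ticket_id: str, channel: str, destination: str) -> dict:
    """Compose the request body the help-desk system would POST to the service."""
    assert channel in ("sms", "email")
    return {
        "ticket_id": ticket_id,      # ties the result back to the help-desk ticket
        "channel": channel,          # where to deliver the one-time link
        "destination": destination,  # phone number or email on file
        "checks": ["document_scan", "selfie_match"],
    }

payload = build_link_request("INC-1042", "sms", "+1-555-0100")
print(json.dumps(payload, indent=2))
```

Because the verification is a hosted service, the organization starts by composing requests like this against its existing ticketing data, with no day-one replumbing.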

Mike Engle:
Right, right. And you can’t implement something like this without uttering the words ROI, right?

Jens Hinrichsen:
Yeah.

Mike Engle:
You have the obvious security benefits, stop bad guys, but the user experience is actually better. And then an organization can have 100,000 calls into a help desk a year. It’s an average of 30% to 50% are password reset or identity related, so why not remove that and save those calls from even coming in? You can automate this, you can do it in a self-service password reset manner as well, SSPR. So yeah, a lot of reasons to do it.
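The help-desk savings math Mike cites (100,000 calls a year, 30% to 50% of them password or identity related) is easy to sketch; the cost per call below is an assumed figure for illustration:

```python
# Back-of-the-envelope version of the help-desk savings math above:
# 100,000 calls/year, 30-50% of which are password/identity resets that
# self-service verification could remove. Cost per call is an assumption.

def annual_savings(total_calls: int, reset_share: float, cost_per_call: float) -> float:
    """Cost of reset-related calls that self-service verification could remove."""
    return total_calls * reset_share * cost_per_call

low = annual_savings(100_000, 0.30, 25.0)   # assumed $25 per call
high = annual_savings(100_000, 0.50, 25.0)
print(f"${low:,.0f} - ${high:,.0f} per year")  # $750,000 - $1,250,000 per year
```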

Jens Hinrichsen:
Yeah. Well, no, and you’re right. And it’s fun to build these business cases alongside organizations because it’s not just a security risk mitigation. There are very direct, like you said, Mike, very direct savings, overall operating efficiencies. Even to the point where as an organization lifts its security posture, they’re getting better policy. Their cyber insurance policies are coming down or at least not going up as quickly as they might go, depending on what most of us are feeling in the industry. So that’s a great point, that this is a really a multi-pronged business case. And I think we’ve observed 10, 20, 30X return on an investment in even just the first year.

Mike Engle:
Yeah. Yeah, it’s a no brainer. So hopefully we’ll get the phone calls before the bad guys get in and not after, but either way …

Jens Hinrichsen:
Mike’s personal number is…

Mike Engle:
That’s right. Well, cool. Thanks so much for joining. It’s been fun chatting with you about this. Hopefully somebody out there will see it and will spark some ideas to make a difference in the world of cybersecurity.

Jens Hinrichsen:
Brilliant. Great chat, Mike.

Mike Engle:
Thank you.

The post Vlog: How Can Remote Caller Verification Protect Your Organization From Social Engineering? appeared first on 1Kosmos.


KuppingerCole

Synthetic Data

by Anne Bailey The term synthetic data stands for artificially generated data that closely replicate the statistical properties, patterns, and characteristics of the real data. This replication mimics reality without including actual information about individuals or entities. As such, it becomes a secure and privacy preserving alternative to using raw, sensitive, or proprietary data. This data is

by Anne Bailey

The term synthetic data refers to artificially generated data that closely replicates the statistical properties, patterns, and characteristics of real data. This replication mimics reality without including actual information about individuals or entities. As such, it is a secure and privacy-preserving alternative to using raw, sensitive, or proprietary data. Synthetic data is used in training, testing, validation, and analytics. Artificial intelligence (AI) uses advanced algorithms to generate these datasets, preserving the statistical integrity of original data sources without exposing private information.
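A minimal illustration of the principle: fit simple statistics of a toy "real" column, then sample a synthetic column with the same statistical shape but none of the original records. Real synthetic-data generators model far richer joint distributions; this only shows the core idea:

```python
# Fit mean and standard deviation of a toy "real" numeric column, then
# sample synthetic values from the same distribution. No actual record
# from the real data is reused in the synthetic column.
import random
import statistics

real_incomes = [42_000, 55_000, 61_000, 48_000, 73_000, 52_000]  # toy data

mu = statistics.mean(real_incomes)
sigma = statistics.stdev(real_incomes)

random.seed(7)  # fixed seed so the sketch is reproducible
synthetic_incomes = [random.gauss(mu, sigma) for _ in range(6)]

# Same distributional character, different (artificial) values.
print(round(mu), [round(x) for x in synthetic_incomes])
```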

Unified Endpoint Management: HP

by John Tolbert In the IT landscape, managing a diverse array of devices such as smartphones, tablets, laptops, and IoT devices presents significant challenges. Device discovery can be difficult due to the distributed and dispersed nature of work, especially in the post-pandemic Work From Anywhere (WFA) and Bring Your Own Device (BYOD) paradigms. After devices are discovered, IT teams face the tas

by John Tolbert

In the IT landscape, managing a diverse array of devices such as smartphones, tablets, laptops, and IoT devices presents significant challenges. Device discovery can be difficult due to the distributed and dispersed nature of work, especially in the post-pandemic Work From Anywhere (WFA) and Bring Your Own Device (BYOD) paradigms. After devices are discovered, IT teams face the task of efficiently managing and configuring these devices, ensuring that each one complies with organizational security policies. The following are some of the common challenges that organizations face with regard to managing computing endpoints.

Security Service Edge: Broadcom

by Mike Small Digital transformation and cloud-delivered services have led to a tectonic shift in how applications and users are distributed. Protecting sensitive resources of the increasingly distributed enterprise with a large mobile workforce has become a challenge that siloed security tools are not able to address effectively. In addition to the growing number of potential threat vectors, the

by Mike Small

Digital transformation and cloud-delivered services have led to a tectonic shift in how applications and users are distributed. Protecting sensitive resources of the increasingly distributed enterprise with a large mobile workforce has become a challenge that siloed security tools are not able to address effectively. In addition to the growing number of potential threat vectors, the very scope of corporate cybersecurity has grown immensely in recent years.

Digital Divide: The US-China Struggle for Cyberspace

by Alejandro Leal The end of history? In the early 1990s, as the Cold War receded into history, political theorists proclaimed the "end of history," suggesting a future dominated by liberal democratic values under a unipolar international system led by the United States. This period coincided with the rapid expansion of the Internet, which was envisioned as a tool to promote global connectivity

by Alejandro Leal

The end of history?

In the early 1990s, as the Cold War receded into history, political theorists proclaimed the "end of history," suggesting a future dominated by liberal democratic values under a unipolar international system led by the United States. This period coincided with the rapid expansion of the Internet, which was envisioned as a tool to promote global connectivity.

However, the ensuing decades have seen a shift toward a multipolar world, with rising powers such as China and regional blocs asserting their influence. This shift has fragmented both cyberspace and the global economy, with nations prioritizing national security over global interests, resulting in a cyber landscape characterized by sovereignty and divergent norms.

Cyberspace, often perceived as an abstract concept, is actually grounded in a robust architecture that encompasses both physical and software infrastructure. This includes undersea cables, terrestrial networks, satellites, and data centers, alongside essential protocols like TCP/IP that facilitate data transfer.

This infrastructure is central to modern geopolitics, emphasizing that control over data and management of information flows are now as strategically important as territorial dominance was in previous centuries. Modern geopolitical strategies are increasingly focused on establishing, defending, and expanding digital domains as much as physical ones.

Two tigers cannot share the same mountain

This can be illustrated, for example, by contrasting international commitments such as the "Declaration for the Future of the Internet," signed by over 60 governments, including the U.S. and EU, which promotes a vision of an open and secure Internet. In contrast, China's State Council's "Jointly Build a Community with a Shared Future in Cyberspace" reflects an alternative vision emphasizing digital sovereignty and state control, indicating a global divide in cyberspace governance and Internet freedom.

The strategic competition between the U.S. and China also extends into the uncharted depths of the ocean, centering on the undersea fiber-optic cables that carry more than 95% of intercontinental Internet traffic. These cables are essential for everything from consumer transactions to government communications. Recently, both major American tech companies and Chinese state-owned enterprises have tightened their control over these assets.

The submarine cable industry is a niche but critical sector that relies on a limited global fleet capable of laying and maintaining these cables. However, this lack of expertise sometimes forces Western governments to rely on foreign powers such as China for essential repairs, creating potential security vulnerabilities. Notably, China has strategically emphasized its role in the “maintenance” aspect, seeking to position itself as an indispensable player in the ongoing operation and upkeep of this vital infrastructure.

At the heart of this competition are semiconductor microchips, which are central to both civilian and military technologies. China's strategy to dominate this essential industry underlines its broader economic and political ambitions to supplant the U.S. as hegemon in the Asia-Pacific region and establish its own “sphere of influence”. This strategic competition is demonstrated by the tensions over Taiwan, a key center of semiconductor manufacturing, where Beijing and Washington's interests are sharply at odds.

Strategic Competition in the Digital Age

Global cyber conflicts and the economic impacts associated with them are reshaping international relations in profound ways. As nations vie for control over critical internet infrastructure and data flows, cyberspace has become a new domain of strategic competition, paralleling traditional conflicts over maritime and land resources. The stakes are high, as control over AI technologies and the cyber realm carries significant implications for national security, military advantage, and technological edge.

Unfortunately, a fragmented international system and divided cyberspace hinder the global cooperation needed to tackle pressing challenges such as climate change and the governance of AI. When the world's nations are divided, their collective power to address these universal issues is significantly weakened. As another Chinese proverb wisely states: "A single tree does not make a forest.”

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to discuss the cyber threat landscape and its economic impact.

See some of our other articles and reports:

Software Supply Chain Security Cyber Risks from China: How Contract Negotiations Can Mitigate IT Risks Beyond Boundaries: The Geopolitics of Cyberspace

Security Orchestration, Automation and Response (SOAR)

by Alejandro Leal As the number and sophistication of cyberattacks have increased over the years, it has become clear that traditional cybersecurity methods and tools are increasingly inadequate to address these evolving threats. Large organizations, whether part of critical infrastructure or not, must be able to detect and respond to incidents by monitoring security and analyzing real-time events

by Alejandro Leal

As the number and sophistication of cyberattacks have increased over the years, it has become clear that traditional cybersecurity methods and tools are increasingly inadequate to address these evolving threats. Large organizations, whether part of critical infrastructure or not, must be able to detect and respond to incidents by monitoring security and analyzing real-time events. To stay secure and compliant, organizations need to actively seek out new ways to assess and respond to cyber threats while providing Security Operations Center (SOC) analysts with the right tools.

Sunday, 10. November 2024

KuppingerCole

Digital Sovereignty or Global Connectivity? The US-China Cyberspace Divide

In this episode, host Matthias welcomes Research Analyst Alejandro Leal to explore the evolving landscape of cyber warfare. Drawing from William Gibson's sci-fi classic "Neuromancer," they discuss how the digital battleground is now a critical arena for nations, corporations, and cyber criminals. Their conversation covers the economic consequences of cyber attacks, the strategic importance of un

In this episode, host Matthias welcomes Research Analyst Alejandro Leal to explore the evolving landscape of cyber warfare. Drawing from William Gibson's sci-fi classic "Neuromancer," they discuss how the digital battleground is now a critical arena for nations, corporations, and cyber criminals.

Their conversation covers the economic consequences of cyber attacks, the strategic importance of undersea fiber optic cables, and the role of semiconductor manufacturing in global tensions. Learn how different national perspectives on cyberspace shape security measures and why international cooperation is essential in addressing challenges like AI governance and climate change.

Join Matthias and Alejandro as they dissect the current state of cyber warfare and its implications for global security. Don't forget to leave your comments and questions below!

Alejandro's Blog: https://www.kuppingercole.com/events/cyberevolution2024/blog/us-china-struggle-for-cyberspace



Friday, 08. November 2024

Extrimian

A Leap Forward in Decentralized Digital Identity

The Buenos Aires City Government has embarked on a transformative journey by integrating QuarkID into its miBA platform, showcasing a significant leap in decentralized digital identity. This initiative not only enhances privacy and security for citizens but also marks a pivotal moment in digital governance. The Role of Key Players Extrimian Extrimian is a key […] The post A Leap Forward in Decen

The Buenos Aires City Government has embarked on a transformative journey by integrating QuarkID into its miBA platform, showcasing a significant leap in decentralized digital identity. This initiative not only enhances privacy and security for citizens but also marks a pivotal moment in digital governance.

The Role of Key Players

Extrimian

Extrimian is a key participant and a technical implementer of the QuarkID protocol, and it used its IDConnect product to facilitate the integration of QuarkID into miBA. This effort underscores Extrimian’s commitment to advancing decentralized identity solutions.

Government of Buenos Aires

The Buenos Aires City Government (GCBA) app miBA has been crucial in adopting and integrating digital solutions that improve city management and citizen services, enhancing both efficiency and transparency.

The goal of this initiative is to give 3.6 million residents of Buenos Aires greater control over their personal information.

zkSync

Powered by zkSync, QuarkID leverages advanced zero-knowledge proofs to ensure secure and private blockchain transactions, significantly enhancing data protection on the miBA platform.

IT Rock

IT Rock has played an instrumental role in seamlessly integrating QuarkID with miBA, ensuring that the technological deployment aligns with the city’s needs for digital identity solutions.

QuarkID

As a protocol integrated into miBA, QuarkID stands at the forefront of this initiative, enabling the secure and efficient verification of digital identities across Buenos Aires.

What is the purpose and use of miBA?

miBA is a digital platform by the Government of Buenos Aires that centralizes access to various city services using advanced technologies like blockchain. This platform allows citizens to securely manage documents and services, enhancing privacy and efficiency. The integration of decentralized identity solutions like QuarkID into miBA exemplifies a significant advancement in providing secure and user-focused digital governance.

Expanding Digital Identity in Buenos Aires

This project by the City of Buenos Aires marks a global milestone as the first city to implement decentralized identity technology on a large scale, issuing verifiable credentials to its entire population. This initiative not only advances the digitalization of public services but also sets a new standard in protecting citizens’ data privacy and security.

The integration has expanded to include 32 verifiable credential types, such as Birth and Marriage Certificates, Student IDs, Gross Income Tax Certificates, Salary Receipts GCBA, Employee Credential GCBA, and more. This expansion not only simplifies the management of personal documents but also enhances the interoperability of digital credentials across various services.
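For readers curious what one of these credentials looks like under the hood, here is a sketch shaped after the W3C Verifiable Credentials Data Model, which QuarkID-style credentials build on; all field values and the credential type name are invented examples:

```python
# Sketch of a verifiable credential following the general shape of the
# W3C VC Data Model ("@context", "type", "credentialSubject", "proof").
# Issuer/holder DIDs and attribute values are placeholders, not real data.
birth_certificate_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "BirthCertificate"],  # illustrative type
    "issuer": "did:example:gcba",                # placeholder issuer DID
    "issuanceDate": "2024-11-08T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:citizen-123",         # holder's DID
        "givenName": "Ana",
        "birthDate": "1990-05-01",
    },
    # In a real credential this is a cryptographic signature by the issuer.
    "proof": {"type": "JsonWebSignature2020", "jws": "..."},
}

print(birth_certificate_vc["type"])
```

The holder stores credentials like this in a wallet and presents only what a verifier needs, which is what makes the cross-service interoperability described above possible.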

Documentation and Process Integration

This integration process, managed in collaboration with IT Rock and Extrimian, exemplifies a streamlined approach to adopting IDConnect. This process is pivotal for cities and businesses looking to implement similar decentralized identity solutions.

Source: https://buenosaires.gob.ar/innovacionytransformaciondigital/miba-con-tecnologia-quarkid-la-ciudad-de-buenos-aires-incorporo

Voices from the Ground

Read some quotes from IT Rock and GCBA representatives that provide personal insights into the project’s impact and their experiences, emphasizing the collaborative effort required to modernize public services.

Extrimian’s CEO, Guillermo Villanueva, shares his thoughts on IDConnect’s role in this integration:

“With Extrimian IDConnect, we are laying the foundations for a more secure, private and self-managed exchange of information, and building a world with more trust and less friction. 

Our product facilitated the process of miBA-QuarkID integration by IT Rock thanks to the simplicity of our product and the support of Extrimian’s team.”

From the Secretary of Innovation and Digital Transformation of the Government of the City of Buenos Aires side, Juan Pablo Migliavacca – Director General de Ciudadanía Digital en Secretaría de Innovación y Transformación Digital del GCBA, shares that:

“The implementation of IDConnect was critical to quickly, securely, and efficiently connect our miBA system with the QuarkID protocol. Thanks to this integration, and continued work with the Extrimian team, we simplified and improved citizens’ access to their data in a reliable, transparent, and secure way, in a completely digital, frictionless environment.”

Conclusion

The integration of QuarkID into Buenos Aires’ miBA platform is more than a technological upgrade; it is a strategic enhancement to the city’s digital infrastructure, setting a benchmark for other cities worldwide.

For further details on the decentralized digital identity movement and Extrimian’s solutions, visit our Use Cases page.

This blog post aims to provide a comprehensive overview of the transformative integration of QuarkID with miBA, illustrating the synergy between technology providers and governmental vision in advancing digital identity solutions. 

For more detailed insights and developments, visit the Extrimian website and the Extrimian Academy.

Download miBA

IOS Android

Download QuarkID

IOS Android

The post A Leap Forward in Decentralized Digital Identity first appeared on Extrimian.


HYPR

HYPR Partners With YubiKey: Bio Series Multi-Protocol Edition

Today Yubico announced the general availability of its YubiKey Bio - Multi-protocol Edition, which supports biometric authentication for FIDO and Smart Card/PIV protocols. Like other YubiKey Bio Series, the new multi-protocol keys incorporate a fingerprint sensor, enabling secure, convenient biometric and PIN-based passwordless login across devices and platforms. The multi-protocol keys

Today Yubico announced the general availability of its YubiKey Bio - Multi-protocol Edition, which supports biometric authentication for FIDO and Smart Card/PIV protocols. Like other YubiKey Bio Series, the new multi-protocol keys incorporate a fingerprint sensor, enabling secure, convenient biometric and PIN-based passwordless login across devices and platforms. The multi-protocol keys, however, offer additional flexibility for enterprises, especially when combined with the HYPR platform.

"By combining Yubico's YubiKey Bio Series with HYPR's advanced solutions, organizations can effortlessly transition to a fully passwordless environment," said Jeff Wallace, SVP Product at Yubico. “This partnership not only enhances biometric authentication but also streamlines the process for desktop logins and strengthens phishing-resistant capabilities. With features like single-step YubiKey fingerprint setup for both web and workstation authentication, centralized credential management, and flexible authentication methods, we empower users to manage their security with confidence, even in sensitive environments.”

HYPR Plus YubiKey Bio — Multi-protocol Edition 

HYPR has worked closely with Yubico for years to bring flexible, phishing-resistant security to businesses around the world. The YubiKey Bio – Multi-protocol Edition is another step towards fully phishing-resistant, passwordless adoption and HYPR is proud to be Yubico’s sole featured partner.

Accelerate Passwordless Strategy

Available in both USB-A and USB-C form factors, the new multi-protocol YubiKeys support modern FIDO and Smart Card/PIV protocols, providing phishing-resistant login for desktops and web applications, across both legacy on-premises and modern cloud environments. Our joint solution makes it easy to provision and roll out the multi-protocol security keys, bringing enterprises the most versatile, secure hardware- and software-based passwordless biometric authentication on the market.

Make Teams More Productive

The new biokeys provide near-instant login using fast, secure biometrics instead of PINs. Seamless desktop to web access removes extra authentication steps without compromising security.

Simplify YubiKey Onboarding and Management

HYPR provides choice to admins and flexibility for end users. Admins may enable users to start with a new YubiKey out of the box, free of any pre-enrolled certificates. Users can enroll their YubiKeys in a single-step click-through with the HYPR Passwordless client.



Users can also easily manage their security keys for lifecycle events such as unpairing, changing the fingerprint, resetting, and more through the HYPR application. Administrators can centrally manage user passwordless access through the HYPR Control Center.


YubiKey Login Flow With HYPR


Product Highlights

- Desktop login on Microsoft Windows using Smart Card/PIV with fingerprint
- Web authentication with FIDO2/WebAuthn and FIDO U2F using the same biometrics as desktop login
- Single-step enrollment for workstation and web using the HYPR application, with no pre-enrolled certificates required
- Centralized credential management for users through the HYPR application
- Flexibility of authentication methods for various use cases, including account recovery and shared workstations

The YubiKey Bio - Multi-protocol edition is available globally through YubiKey as a Service. Learn more about the HYPR and YubiKey integration.

To see HYPR and the new YubiKey Bio - Multi-protocol Edition in action, schedule a demo.

 


Finicity

Simplify and Speed Up Customization with Mastercard’s New Customize Connect Editor 


Mastercard Open Banking is transforming the way businesses tailor customer experiences with the launch of Customize Connect, a no-code editor that makes customizing Connect experiences faster, simpler, and fully in your control. Available through the Client Hub portal, this powerful new tool allows clients to easily personalize their Connect experiences without needing to rely on Mastercard’s support teams. 

Customize Connect: Empowering Businesses to Optimize Their Customer Journeys 

Customize Connect puts clients in the driver’s seat, offering an intuitive, self-service interface that allows clients to adjust key elements of the Connect experience—whether for testing or production—using just one simple editor. With real-time validation, businesses can rapidly iterate and deploy updates, enhancing the way their customers securely link accounts. 

Now, businesses can manage their Connect experiences independently, from onboarding new experiences to fine-tuning existing ones, all without the need for extensive technical knowledge. It’s all about giving clients the ability to quickly adapt and scale their offerings based on customer needs. 

Key Customization Features

- Branding Flexibility: Customize Connect makes it easy to adjust the look and feel of the Connect experience to match your brand identity. Upload logos, match accent colors, and ensure seamless integration with the rest of your user interface for a consistent experience.
- Financial Institution Customization: Clients can tailor the financial institutions displayed to end users, ensuring they see the banks they’re most likely to use. With the ability to customize up to 8 FIs, businesses can simplify authentication by presenting the most relevant options.
- Streamlined Account Selection: Whether your customers are selecting one or multiple accounts, Customize Connect allows you to refine the experience by controlling which account types are available for selection. This is especially useful in payment-focused experiences, where you may only want to show checking or payment accounts.
- Real-Time Testing & Validation: With the ability to make changes on-the-fly, businesses can validate their customizations in real time, reducing the need for lengthy testing periods and ensuring smooth deployment.

Seamless Integration into Your Workflow

Customize Connect is integrated directly into the Client Hub portal, making it easy to manage all your settings in one place. For more technical users, access it through Mastercard Developers to incorporate it into your existing projects. Whether you’re adjusting a live production experience or testing new options, the process is quick, simple, and completely within your control. 

Learn More 

For a full walkthrough of how to use Customize Connect, visit Mastercard Developers for detailed documentation or watch the quick demo video below to see the tool in action. 

With Customize Connect, Mastercard Open Banking empowers businesses to create better, more tailored customer experiences on their terms.

The post Simplify and Speed Up Customization with Mastercard’s New Customize Connect Editor  appeared first on Finicity.


auth0

What Are OAuth Pushed Authorization Requests (PAR)?

Learn what Pushed Authorization Requests are and when to use them to strengthen the security of your OAuth 2.0 and OpenID Connect-based applications.

Datarella

Confidential Computing for Industry 4.0


With the Cosmic-X project nearing its conclusion, it is finally time to lift the curtain on the blockchain solution that Datarella has built over the last two years to enable confidential computing and data sharing in Industry 4.0. In this first entry of a series of technical posts about designing, implementing, and integrating an edge-to-cloud blockchain solution, we discuss the evaluation process for selecting a suitable blockchain platform for Cosmic-X and how that platform operates on a protocol level to provide an open, transparent, and secure infrastructure for industrial use cases.

Evaluating Blockchain Platforms

Today, many different blockchain platforms exist, but their suitability for industrial use cases remains specific or, at times, limited. To achieve the best match between the requirements of Cosmic-X and the possibilities of blockchain technologies, the team conducted an extensive evaluation process. This evaluation compared both private and public blockchain platforms based on security, privacy, scalability, and interoperability.

Current-generation blockchain platforms predominantly perform well in security and scalability, yet privacy and interoperability often fall short. To achieve privacy in industrial scenarios like Cosmic-X, organizations have almost exclusively used private or consortium blockchains such as Hyperledger Fabric in the past. However, these approaches inherently involve high infrastructure costs for the operating parties, as well as centralization and limited interoperability. In contrast, public blockchains offer resilience, cost efficiency, and a degree of interoperability, though they have only recently started focusing on privacy and data protection. Blockchain protocols with confidential computing capabilities remain relatively new and untested. Nevertheless, when weighing the advantages and disadvantages of the two approaches, a privacy-focused public network emerges as the preferred solution in an industrial context.

For a public network to meet Cosmic-X’s privacy and data protection requirements, it must support the multi-tenancy paradigm. Multi-tenancy enables a single instance of a software application to serve multiple clients while ensuring logical isolation. Different clients share an underlying infrastructure, which optimizes resource use and reduces infrastructure costs. Further, it enhances efficiency in data access, management, and collaborative data sharing.

Through this evaluation, the Cosmos-based Secret Network emerged as the blockchain platform best suited for Cosmic-X. The Secret Network functions as a public blockchain specifically developed for confidential computing. By combining established encryption techniques with trusted execution environments, it provides so-called Secret Contracts. This type of smart contract establishes consensus on computation without disclosing incoming or outgoing data. Integrated access control mechanisms enable third-party access and create an auditable processing chain. Thus, the Secret Network satisfies the need for multi-tenancy capability while retaining all the benefits of a public network.

How the Secret Network Works

The Secret Network leverages Intel Software Guard Extensions (Intel SGX) to create Trusted Execution Environments (TEE) that enable Secret Contracts. These smart contracts, based on the CosmWasm framework, allow for fully private computation of data. Outside a TEE, the transaction payloads and the network’s current state are encrypted at all times. Only the data owner and an authorized third party can decrypt and view data inputs and outputs. A combination of symmetric and asymmetric encryption schemes—ECDH (x25519), HKDF-SHA256, and AES-128-SIV—achieves this end-to-end encryption. Each validator in the network must run an Intel SGX-compatible CPU and instantiate a TEE that follows the network’s rules.
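The key-derivation stage of that pipeline can be illustrated in isolation. Below is a minimal, self-contained sketch of HKDF-SHA256 (RFC 5869), the extract-and-expand construction named above; the x25519 key exchange and AES-128-SIV encryption stages are omitted, and this illustrates the primitive itself, not Secret Network’s actual implementation.

```python
import hashlib
import hmac


def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF with SHA-256: derive `length` bytes of key material
    from input keying material (e.g. an ECDH shared secret)."""
    # Extract step: PRK = HMAC-SHA256(salt, IKM)
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand step: T(i) = HMAC-SHA256(PRK, T(i-1) || info || i)
    okm, t = b"", b""
    blocks = -(-length // 32)  # ceil(length / hash size)
    for i in range(1, blocks + 1):
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
    return okm[:length]
```

In the Secret Network’s scheme, the `ikm` would be the x25519 shared secret, and the derived output would key the AES-128-SIV cipher that encrypts transaction payloads.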

When an encrypted transaction arrives in the shared mempool of the network, a validator forwards it to their TEE, where a shared secret is derived and used to decrypt the transaction. The WASMI runtime then processes the plaintext input. Finally, the validator re-encrypts the updated contract state and broadcasts it to the network through a block proposal. If over two-thirds of the current network voting power agree on the result, the network appends the proposed block to the Secret Network blockchain.
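The acceptance threshold in that last step can be stated precisely. The sketch below models the Tendermint-style rule used across Cosmos-based chains (strictly more than two-thirds of total voting power must sign off); it is a conceptual illustration, not Secret Network code.

```python
def block_accepted(power_for: int, total_power: int) -> bool:
    """Tendermint-style BFT commit rule: a proposed block is appended
    only when strictly more than two-thirds of the network's total
    voting power agrees on the result."""
    return 3 * power_for > 2 * total_power
```

Note that exactly two-thirds is not enough: with 3 units of total power, 2 agreeing votes still fail the strict inequality.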

For access control, the Secret Network offers Viewing Keys and Permits. A Viewing Key acts as an encrypted password that grants a third party permanent access to data related to a specific smart contract and private key. A Permit allows a more granular approach, restricting viewing access to specific parts of data for a set period. Consequently, despite its encrypted nature, the network remains fully auditable.
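As a conceptual model (not the actual Secret Network API), the difference between the two mechanisms can be sketched as follows: a viewing key grants standing access to everything tied to a contract, while a permit scopes access to named fields and an expiry time.

```python
import time
from dataclasses import dataclass


@dataclass
class ViewingKey:
    contract: str
    key: str  # acts like an encrypted password; access is permanent


@dataclass
class Permit:
    contract: str
    allowed_fields: frozenset  # which parts of the data may be viewed
    expires_at: float          # Unix timestamp after which access lapses


def permit_allows(permit: Permit, contract: str, field_name: str, now=None) -> bool:
    """A permit is valid only for its own contract, its listed fields,
    and until its expiry."""
    now = time.time() if now is None else now
    return (permit.contract == contract
            and field_name in permit.allowed_fields
            and now < permit.expires_at)
```

The names here are illustrative; on-chain, both mechanisms are enforced inside the contract’s trusted execution environment rather than by the caller.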

In the next post, we’ll explore how we leverage the Secret Network to secure machine data integrity directly from its point of origin to its consumption by a Machine Learning Model.

The post Confidential Computing for Industry 4.0 appeared first on DATARELLA.


SelfKey

SingularityDAO, SelfKey and Cogito Finance Token-Holders Approve Merger to Form Singularity Finance

SingularityDAO, SelfKey, and Cogito Finance have agreed to form Singularity Finance after the communities approved the merger. SDAO and KEY token-holders voted overwhelmingly in favor of the proposal.



Dock

The Port of Bridgetown Accelerates Vessel Clearance with Dock’s Verifiable Credential Technology


Zug, Switzerland – 8 November, 2024 – Barbados Port Inc., the state-owned entity that manages the Port of Bridgetown, has integrated Dock's Verifiable Credential technology into their Maritime Single Window, to revolutionize their vessel clearance processes. This cutting-edge solution enables the Port of Bridgetown to expedite vessel clearance for both arriving and departing ships, while ensuring the integrity of credentials through tamper-proof, verifiable data. This integration enhances efficiency, security, and trust in the port’s clearance procedures.

Full article: https://www.dock.io/post/port-of-bridgetown-accelerates-vessel-clearance-with-docks-verifiable-credential-technology


Tokeny Solutions

ERC-3643: The Motherboard for Composable Tokenized Assets


Product Focus

ERC-3643: The Motherboard for Composable Tokenized Assets

This content is taken from the monthly Product Focus newsletter in November 2024.

“What token standard does your platform support?” This is a question we hear often. As a regular reader of our newsletter, you might think, “Tokeny? They’re an ERC-3643 platform.” But that’s only part of the story.

Think of ERC-3643 as a Lego motherboard. It’s the fundamental base, the piece that holds everything else together. The real magic happens when you start adding multiple smart contract blocks. What makes its composability powerful is the ability to reuse existing, proven smart contracts.

Here are a few of the most common “add-on blocks” our clients add to their tokenized assets:

- Smart contracts ensure compliance: Compliance contracts make sure that only approved identities can hold tokens. They also set rules for when and how tokens can be transferred, blocking any unauthorized moves.
- Smart contracts enrich asset onchain data: Asset identity contracts let you add data to assets, like ISIN, LEI, net asset value (NAV), and ESG ratings, making it easy for other platforms, such as distributors, to access this information quickly.
- Smart contracts enable distribution: Distribution contracts control where the tokens can be distributed. In addition, Delivery vs. Delivery (DvD) contracts can automate buying and selling. If all requirements are met, DvD swaps happen without counterparty risk.
- Smart contracts automate corporate actions: Corporate action contracts handle tasks like paying dividends or coupons, making middle- and back-office operations faster, smoother, and safer.
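To make the compliance point concrete, here is a toy Python model of identity-gated transfers: only addresses registered in an identity registry may receive tokens. ERC-3643 itself is a Solidity standard with a richer interface; the class and method names below are illustrative only.

```python
class ComplianceToken:
    """Toy model of an ERC-3643-style permissioned token:
    balances move only between verified identities."""

    def __init__(self):
        self.identity_registry = set()  # addresses with a verified onchain identity
        self.balances = {}

    def register_identity(self, addr: str) -> None:
        self.identity_registry.add(addr)

    def mint(self, to: str, amount: int) -> None:
        if to not in self.identity_registry:
            raise PermissionError(f"{to} has no verified identity")
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, to: str, amount: int) -> None:
        # The compliance check runs before any balance moves
        if to not in self.identity_registry:
            raise PermissionError(f"{to} has no verified identity")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

A transfer to an unregistered address fails before any state changes, which is the behavior the compliance contracts described above enforce onchain.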

ERC-3643 isn’t here to compete with other token standards; it’s designed to work alongside them, offering composability and complementing their functionality. We act as a smart contract factory to ensure the smooth deployment and management of all smart contracts associated with tokens. The future of onchain finance is composable and interoperable, and we are passionate about building products to achieve that vision.

Please do not hesitate to contact us if you have any questions regarding this topic.

P.S. What is more exciting is that this week, ERC-3643 was recognized as the official standard in Project Guardian by the Monetary Authority of Singapore (MAS) for ensuring compliance in tokenized debt instruments and funds. Check out more details here.

Joachim Lebrun, Head of Blockchain

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us

The post ERC-3643: The Motherboard for Composable Tokenized Assets appeared first on Tokeny.


KuppingerCole

Synthetic Data for Security and Privacy


by Anne Bailey

This report provides an overview of the Synthetic Data market and a compass to help you find a solution that best meets your needs. It examines solutions that generate datasets that closely replicate the statistical properties, patterns, and characteristics of real and production data. It provides an assessment of the capabilities of these solutions to meet the needs of all organizations to generate and work with synthetic data.

ShareRing

A revolutionary way to protect personal and corporate data using Google Cloud and ShareRing.


In our daily lives we are regularly asked to provide personal details, and in many instances we cannot secure a service or product unless we do so. This may involve a simple request to provide proof of identity, or a much more detailed one, perhaps requiring verification. This can be time-consuming and often fraught, and there is always concern for the security of that data. While online privacy concerns are at an all-time high, organizations increasingly store sensitive information digitally, in centralized databases.

Global regulators continue to evolve laws and regulations, accompanied by outsized penalties for companies that fail to comply with them. The annual cost of cybercrime to Australia alone is estimated to be in the range of $29 billion to $30 billion. A 2023 KPMG report estimated the total cost at $29 billion per year, with direct costs to businesses accounting for a significant portion. The Australian Cyber Security Centre’s 2022-23 Cyber Threat Report highlighted a 14% increase in the average cost of cybercrime per report compared to the previous year.

Increased regulation and penalties build on the foundation set by other international privacy legislation, such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The maximum penalties for non-compliance have been significantly increased. The penalty for a serious privacy breach up until late 2022 was just $2.22 million. Now, businesses can be charged with the greater of:

- $50 million
- three times the value of any benefit obtained through the misuse of information
- 30 percent of the company’s adjusted turnover in the relevant period

These penalties are a compelling reason alone for businesses to improve the way they protect their client’s data. This is just the beginning — businesses need to understand their data obligations and where needed, implement new, compliant processes.

Read on to explore why the combination of Google Cloud Platform, and ShareRing’s digital identity platform, are a revolutionary approach to personal privacy and business protection.

The data security arms race

More and more, the transfer of our personal details is done digitally, and the risk of someone obtaining those details is growing exponentially. We are all aware of the growing risk of identity theft and other exploits that compromise our personal information. A centralized store of identity data is an irresistible, high-risk target for cyber criminals and threat actors.

Digital Identity as a Service

ShareRing Link is a decentralized public infrastructure (DePIN) solution that leverages Google Cloud for its core infrastructure. ShareRing’s ecosystem is architected to ensure minimal personal data is shared and, importantly, that none is stored centrally. ShareRing Link is a business system that enables a user to share select personal information from an encrypted Vault with a business’s backend system via a zero-knowledge proof function, such as KYC information with a financial institution, or age verification with a licensed merchant for alcohol sales.

ShareRing Me, a digital identity app available on Android, uses blockchain technology to collect and store verified identity data in an immutable, reusable, self-sovereign Vault, a “Digital Me”, on the user’s personal smart device. At all times, the user controls who they choose to share their data with from their device. ShareRing Me also gives the user the ability to back up the heavily encrypted Vault file to their personal Google Cloud storage, to ensure no data is lost.

Privacy and Data Segregation with Google Cloud

Google Cloud Platform is quickly becoming a leader in business and enterprise cloud computing worldwide. This is, in large part, due to their “security by design, security by default” stance, underpinning a comprehensive and industry-leading approach to data protection.

Data Segregation:

Google Cloud customer data is siloed, which reduces attack vectors. This is driven by their self-imposed objectives to protect customer data and security, as well as the need to adhere to increased regulatory compliance requirements globally.

Google Cloud uses logical isolation mechanisms inherent to virtualization technology to create isolated virtual environments for each customer, ensuring that their data and applications are not directly accessible to others.

- ShareRing Self-Sovereignty: ShareRing Me uses a decentralized storage model to keep verified and encrypted personal data on the user’s personal smart device.
- Data Encryption and Access Controls:
  - Encryption: Google Cloud has built-in encryption capabilities, such as Cloud KMS (Key Management Service), to encrypt data at rest and in transit.
  - ShareRing Smart Contracts: ShareRing uses smart contracts to automate and enforce access rules based on predefined conditions, ensuring that only authorized parties can access data. Data is secured across multiple nodes in the ShareRing blockchain, making it resistant to breach or tampering.
- IAM and RBAC: Google Cloud Identity and Access Management (IAM) is used to implement granular access controls and permissions for different users and roles. In addition, Google Cloud also uses role-based access control (RBAC) and network segmentation to restrict access to customer data based on user permissions and network boundaries.
- Storage: While Google Cloud can provide additional storage capacity, it can be used in conjunction with ShareRing to offer redundancy and disaster recovery of the encrypted Vault.

Strong technology intersect

ShareRing Founder, Tim Bos, stated: “Google Cloud’s commitment to customer data protection is a significant factor in why we chose to partner with them. Our technologies and philosophies intersect seamlessly. The extensive and industry-leading security controls Google Cloud provides, together with ShareRing’s self-sovereign identity solution, are a much-needed evolution in privacy, in a world where personal information is as precious as, or more precious than, your other assets.”

Assurance of best practices in a combined identity solution

Both ShareRing and Google Cloud undergo regular audits and certifications to ensure compliance with various security standards, such as ISO 27001. Digital identities are also becoming increasingly regulated. ShareRing is certified against the UK’s Digital Identity and Attributes Trust Framework (DIATF) and is seeking accreditation against similar frameworks, such as the EU’s eIDAS 2 and Australia’s Digital ID framework, as they come into effect.

Business inquiries: 

Ryan Bessemer, ShareRing Global

+61 403 300 442 

ryan@sharering.network 

About ShareRing

ShareRing Global stands as the only digital identity business certified with ISO 27001 Information Security Management certification, as well as a DIATF-certified provider in the UK. Our suite of identity verification technologies transforms online interactions, ensuring they are safer, faster, and easier. You choose what you share with ShareRing.

 www.sharering.network 

About Google Cloud Platform

Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, data analytics, and machine learning, alongside a set of management tools.

cloud.google.com 

The post A revolutionary way to protect personal and corporate data using Google Cloud and ShareRing. appeared first on ShareRing.


IdRamp

SailPoint Account Recovery Using CLEAR Identity Verification


IdRamp has partnered with SailPoint and CLEAR to transform account recovery through advanced Identity Verification (IDV).

The post SailPoint Account Recovery Using CLEAR Identity Verification first appeared on Identity Verification Orchestration.

Thursday, 07. November 2024

KuppingerCole

Overcoming the Challenges of MFA and a Passwordless Future


Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.

Modern technology offers promising solutions to these authentication challenges. Advanced MFA methods, biometrics, and passwordless technologies provide enhanced security and improved user experience. However, successful implementation requires careful planning, integration with existing systems, and a focus on scalability and user adoption.

Alejandro Leal, Research Analyst at KuppingerCole, will introduce the concept of passwordless authentication, explore its benefits and challenges, and share market insights based on the latest research. He will provide valuable perspectives on the current state of authentication technologies and future trends.

Malte Kahrs, Founder and CEO of MTRIX GmbH, will address practical implementation challenges of MFA and passwordless authentication. He will discuss strategies for overcoming technical hurdles, integrating with Microsoft Entra ID, managing hardware distribution, and ensuring a smooth user experience for successful adoption.




Rise of the Machines - Why Machine Identity Management Has Become Essential


by Matthias Reinwarth

In today’s hybrid and complex IT environments, machine identities are multiplying at an astonishing rate. If managing human identities was once the main concern, that focus has shifted drastically. Today there are approximately 45 to even 100 times more machine identities than human ones (the figures vary depending on whether you ask vendors, tech experts, or analysts), and each of these machine identities poses a potential security risk if not properly managed. The rapid growth of cloud, DevOps, and automation has spurred this explosion in machine identities, creating a critical need for robust management strategies to ensure secure authentication, controlled access, and safe interaction across digital environments.

Machines Need Identities Too – But Not Just "Machines"

While we often talk about "machines", this term actually covers a wide range of digital entities. Beyond physical machines, today’s IT landscapes include IoT devices, OT systems, bots, applications, technical accounts, containerized services, and even cloud workloads, each of which demands a unique, securely managed identity. Machine identities enable non-human entities to authenticate, communicate, and interact autonomously, safeguarding sensitive data and critical system resources.

This landscape of digital identities is diverse, each with distinct lifecycles, requirements for secure communication, and authentication needs. For instance, IoT devices like connected cars or smart-home systems need robust authentication mechanisms to communicate safely. Industrial OT devices like SCADA sensors need secure identities for data exchange, while Kubernetes clusters and cloud instances require identities to manage interactions within dynamic, cloud-native environments. The complexity and scope of these digital interactions mean that every identity, no matter how short-lived, needs to be handled with precision and care.

Visibility, Control, and Lifecycle Management – Core Challenges

With such a wide array of machine identities, maintaining visibility and control is paramount. However, many organizations struggle to track and manage these identities effectively. As new short-lived identities proliferate in dynamic environments, they often escape detection, leading to potential vulnerabilities. When these identities are inadequately managed, they can become weak points in security, offering potential access points for cyber threats.

Another challenge is lifecycle management. Machine identities, unlike human ones, often have short lifespans and require frequent updates, renewals, or deactivations. If these lifecycles aren’t managed meticulously, organizations risk having outdated, insecure identities lingering in their systems. This unmanaged sprawl of identities can compromise not only security but also compliance with standards such as GDPR or HIPAA. The implications are clear: lifecycle management must be systematic, automated, and responsive to the high turnover typical of machine identities.
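As a rough illustration of what systematic, automated lifecycle handling might look like, the sketch below models identities with a time-to-live, deactivates expired ones, and flags identities approaching renewal. All names and thresholds are invented for the example; a real deployment would hook this into certificate or token issuance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    name: str
    issued_at: datetime
    ttl: timedelta          # machine identities are typically short-lived
    active: bool = True

    @property
    def expires_at(self) -> datetime:
        return self.issued_at + self.ttl

def sweep(identities: list[MachineIdentity],
          now: datetime,
          renewal_window: timedelta = timedelta(hours=1)) -> dict[str, list[str]]:
    """Deactivate expired identities and flag those that need renewal soon."""
    report = {"deactivated": [], "renew_soon": []}
    for ident in identities:
        if not ident.active:
            continue
        if now >= ident.expires_at:
            ident.active = False            # no lingering, outdated identities
            report["deactivated"].append(ident.name)
        elif now >= ident.expires_at - renewal_window:
            report["renew_soon"].append(ident.name)
    return report

now = datetime(2024, 11, 7, 12, 0, tzinfo=timezone.utc)
fleet = [
    MachineIdentity("ci-runner-42", now - timedelta(hours=25), timedelta(hours=24)),
    MachineIdentity("payments-api", now - timedelta(hours=23, minutes=30), timedelta(hours=24)),
    MachineIdentity("batch-job-7", now - timedelta(hours=1), timedelta(hours=24)),
]
print(sweep(fleet, now))
# {'deactivated': ['ci-runner-42'], 'renew_soon': ['payments-api']}
```

The point of the sketch is that expiry is enforced by policy on every sweep, not remembered by humans; in practice the sweep would run continuously against an inventory, which is exactly the visibility problem described above.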

The Risks of Poorly Managed Machine Identities

When machine identities go unmanaged, the repercussions can be severe. Unauthorized access to sensitive systems, privilege escalation through compromised identities, and exposed secrets are just a few of the risks. In the absence of effective monitoring, organizations miss out on the timely detection of security threats, allowing vulnerabilities to go unnoticed. Moreover, hard-coded secrets, if left unprotected, become easy targets for exploitation, leading to potential security breaches.

As machine identities proliferate, so too does the attack surface, leaving organizations more vulnerable to unauthorized access and data leaks. This is particularly problematic in industries where compliance and security are paramount, as mismanaged identities can lead directly to regulatory violations.

Machine Identities in a Zero Trust Framework

With Zero Trust increasingly central to security strategies, machine identities play a critical role. In a Zero Trust model, no machine is assumed to be inherently trustworthy; every interaction requires authentication and verification. This approach is essential in today’s multi-cloud and hybrid IT landscapes, where machines frequently interact across potentially insecure networks. With technologies like mutual TLS (mTLS), machine identities enable secure communication between devices, ensuring that only authenticated entities can access critical resources.

In a Zero Trust framework, machine identities not only secure communication but also enable ongoing verification of interactions. This principle is foundational to establishing and maintaining trust, both for human and machine identities, within an organization’s digital ecosystem.
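To make the mTLS point concrete, here is a minimal sketch using Python’s standard `ssl` module. The certificate paths are placeholders; the key detail is that a default server context does not demand a client certificate, and mutual TLS is an explicit opt-in via `CERT_REQUIRED`.

```python
import ssl

def make_mtls_server_context(cert_file: str, key_file: str, ca_file: str) -> ssl.SSLContext:
    """Server-side context that refuses any peer without a valid client certificate.
    File paths are placeholders for the server's own cert/key and the CA that
    issued the client (machine) identities."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED            # this is what makes it *mutual* TLS
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    ctx.load_verify_locations(cafile=ca_file)
    return ctx

# Default server contexts do NOT require client certificates:
default_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # False until we opt in
```

In a Zero Trust posture, that opt-in is the norm: every workload presents its own identity, and unauthenticated peers are rejected at the transport layer before any application logic runs.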

Secure Secrets Management – An Essential Pillar

Effective management of machine identities demands secure handling of “secrets” - API keys, SSH keys, certificates, and other credentials essential for authenticating machine communication. These secrets need to be stored securely, rotated regularly, and managed centrally to reduce human error and prevent misuse. Automated secrets management allows organizations to scale this process to handle the vast numbers of identities typical in a modern IT environment, ensuring that each identity’s lifecycle is managed securely from creation to deactivation.

Integrating secrets management into a comprehensive identity governance framework provides additional layers of security. This approach not only minimizes security gaps but also enforces consistent security practices across both human and machine identities.
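A toy sketch of centralized, age-based secret rotation is shown below. The store and policy are invented for illustration (real deployments use a vault product), but the mechanism is the one described above: secrets are generated rather than hard-coded, carry their creation time, and are rotated transparently once a policy threshold is exceeded.

```python
import secrets
from datetime import datetime, timedelta, timezone

class SecretStore:
    """Toy central store: every secret carries its creation time so rotation
    is enforced by policy instead of by memory."""
    def __init__(self, max_age: timedelta):
        self.max_age = max_age
        self._entries: dict[str, tuple[str, datetime]] = {}

    def issue(self, name: str, now: datetime) -> str:
        value = secrets.token_urlsafe(32)   # generated, never hard-coded
        self._entries[name] = (value, now)
        return value

    def get(self, name: str, now: datetime) -> str:
        value, created = self._entries[name]
        if now - created > self.max_age:    # rotate transparently on expiry
            return self.issue(name, now)
        return value

store = SecretStore(max_age=timedelta(days=30))
t0 = datetime(2024, 11, 1, tzinfo=timezone.utc)
old = store.issue("payments-api-key", t0)
same = store.get("payments-api-key", t0 + timedelta(days=10))
rotated = store.get("payments-api-key", t0 + timedelta(days=45))
print(old == same, old == rotated)   # True False
```

Because callers always fetch through `get`, rotation never requires coordinating a manual credential swap across consumers, which is where human error usually creeps in.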

Key Takeaways: The Essentials of Machine Identity Management

Machine Identities as a Foundation for IT Security
Machine identities are indispensable for secure interactions and communications in modern IT environments.

Scaling with Growth
The exponential increase in machine identities demands robust, automated management to keep pace with this growth.

Lifecycle Management for Security
Systematic management of identity lifecycles mitigates the risks posed by outdated or uncontrolled identities.

Secrets Management to Close Security Gaps
Proper secrets management is vital for protecting machine identities and preventing security breaches.

Integration with Identity Governance
Machine identities should be part of a unified identity governance framework to ensure consistent security policies.

Accountability Through Ownership
Clear assignment of responsibilities is crucial for maintaining the security and traceability of machine identities.

A More Precise Term for Identity Diversity
The term "machine identities" may need refinement to better capture the diverse range of non-human identities in today’s digital environments.

In short, machine identity management is not only critical but complex, requiring organizations to adopt structured, automated, and comprehensive approaches. In a world where machine interactions outnumber human ones, secure identity management is not optional - it’s essential.


SailPoint Atlas - Unified Identity Security Platform

by Nitish Deshpande

SailPoint Atlas is a unified identity security platform that combines modern technologies such as AI and machine learning. A technical overview of SailPoint Atlas is included in this KuppingerCole Executive View report.

auth0

Your B2B SaaS App Just Got Better

Machine-to-Machine Access for Organizations reaches General Availability (GA), unlocking SaaS APIs for developers

Northern Block

A Summary of Internet Identity Workshop #39

Highlights from IIW39, which took place between October 29th and 31st, 2024, at the Computer History Museum in Mountain View, California. The post A Summary of Internet Identity Workshop #39 appeared first on Northern Block | Self Sovereign Identity Solution Provider.

(Images used in banner courtesy of Ankur Banerjee, @ankurb)

 

Introduction

Below are my personal highlights from the Internet Identity Workshop #39, held from October 29–31, 2024, at the Computer History Museum in Mountain View, California. The Internet Identity Workshop (IIW) is a one-of-a-kind, unconference-style event that gathers professionals across the digital identity space to openly discuss, debate, and innovate. IIW39 set a record for attendance, with 178 sessions, giving us the opportunity not only to stay up-to-date but also to contribute through sponsorship and active participation, reinforcing our commitment to this evolving field.

Images courtesy of Internet ID Workshop (@idworkshop)

Our team left inspired by the range of perspectives and in-depth conversations and are excited to share some of the key takeaways relevant to digital credential ecosystems. To organize the insights, I’ve grouped the most impactful sessions into three themes: trust establishment, adoption, and tech stack updates. These themes helped me categorize sessions that stood out and offered valuable perspectives for our work in digital credentials, wallets, and trust establishment infrastructure.


#1 – Trust Establishment

This IIW featured many discussions around governance, trust registries and trust establishment.

Progressive Trust in Issuer Registries with LinkedClaims

This session explored the concept of “progressive trust” in issuer registries, where entities can initially join a trust registry with minimal requirements and gradually build their credibility over time by adding claims. LinkedClaims was proposed as a potential solution to enable this approach, allowing ecosystem participants to add claims to a trust registry incrementally, thereby increasing their level of assurance as they demonstrate further compliance or meet additional standards. By setting low initial barriers for inclusion, this model supports a more accessible and open ecosystem, where entities can start with a basic level of trust and enhance it progressively. This approach provides an inclusive framework for building transparency and encouraging a steady flow of verifiable claims, enabling credentials to gain broader acceptance across different ecosystems as entities solidify their trustworthiness.
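The progressive-trust idea above can be sketched as a registry whose assurance tiers rise with accumulated claims. The tier names and thresholds below are invented for illustration, not taken from the LinkedClaims proposal; the point is simply the low entry barrier and the incremental climb.

```python
from dataclasses import dataclass, field

# Hypothetical assurance tiers keyed on how many verified claims an issuer
# has accumulated; thresholds are invented for illustration.
TIERS = [(0, "listed"), (2, "verified"), (5, "accredited")]

@dataclass
class RegistryEntry:
    issuer: str
    claims: list[str] = field(default_factory=list)

    @property
    def assurance(self) -> str:
        level = "listed"
        for threshold, name in TIERS:
            if len(self.claims) >= threshold:
                level = name
        return level

class TrustRegistry:
    def __init__(self):
        self.entries: dict[str, RegistryEntry] = {}

    def join(self, issuer: str):                    # minimal entry requirements
        self.entries[issuer] = RegistryEntry(issuer)

    def add_claim(self, issuer: str, claim: str):   # trust is built incrementally
        self.entries[issuer].claims.append(claim)

registry = TrustRegistry()
registry.join("did:example:acme-university")
print(registry.entries["did:example:acme-university"].assurance)  # listed
for claim in ["accredited-by:gov", "iso27001-audit"]:
    registry.add_claim("did:example:acme-university", claim)
print(registry.entries["did:example:acme-university"].assurance)  # verified
```

In a real ecosystem each claim would itself be a verifiable, signed attestation; the registry merely aggregates them into a level of assurance that verifiers can query.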

 

Well-Attended Discussion on Bridging Trust: DIDs, DNS, and X.509

Another session that brought trust establishment into the discussion focused on creating layered assurance by bridging decentralized identifiers (DIDs) with established infrastructures like DNS and X.509. This hybrid approach allows any entity—not just credential issuers—to build more assurance by combining DIDs with established, trusted systems. This setup is particularly valuable for organizations with a strong digital presence, as it lets them leverage existing DNS or certificate frameworks to increase the assurance of their identity or credentials. We’ve already implemented this concept with DNS bridging in our IETF draft on High Assurance DIDs with DNS, demonstrating how entities can use this approach to create dependable, transparent interactions. As one of the co-chairs of the High Assurance VID Task Force (HAVID), I’m actively engaged in advancing this approach, proving that layered trust realms can support higher assurance in decentralized ecosystems.

A diagram provided by Dr. André Kudra, shown during the IIW session

European Union Digital Identity Wallet (EUDI Wallet) Relying Party Authentication

The topic of relying party authentication for the EUDI Wallet sparked enough discussion to span two sessions. The first session on day 2 raised several open questions around the best approach for authenticating relying parties, leading to a follow-up session on day 3 to further unpack the issues.

One of the key points in discussing EUDI Wallet’s architecture was the requirement for relying parties to provide certain data about themselves to the wallet and, by extension, to the holder. This requirement, stemming from the EU’s eIDAS regulation, allows the holder to have insight into what data a relying party wishes to access and how they intend to use specific credentials. This transparency is essential for enabling informed decisions by the holder and safeguarding data privacy.

Various technical options were explored for implementing this authentication, including traditional X.509 solutions, OpenID Federation, and SD-JWTs (selective disclosure JSON Web Tokens). Each approach has unique strengths and challenges, with OpenID Federation emerging as a flexible option for interoperability. However, concerns around the complexity of the OpenID Federation specification led to discussions on simplifying or segmenting it to make it more accessible, particularly for the EUDI Wallet context.
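The core trick behind SD-JWT-style selective disclosure can be shown in a few lines: the issuer signs only salted hashes of claims, and the holder later reveals just the claims they choose. This is a simplified sketch of the hashing mechanism, not the full SD-JWT wire format (no JWT signing or key binding is shown).

```python
import base64
import hashlib
import json
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value: str) -> tuple[str, str]:
    """Return (disclosure, digest). Only digests go into the signed token;
    the random salt prevents guessing claim values from their hashes."""
    disclosure = b64url(json.dumps([b64url(secrets.token_bytes(16)), claim, value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

# Issuer: hash every claim; the signed payload would carry digests only.
claims = {"given_name": "Alice", "date_of_birth": "1990-01-01", "nationality": "CA"}
disclosures = {k: make_disclosure(k, v) for k, v in claims.items()}
signed_digests = sorted(d for _, d in disclosures.values())

# Holder: reveal only nationality; the verifier re-hashes and matches.
revealed = disclosures["nationality"][0]
digest = b64url(hashlib.sha256(revealed.encode()).digest())
print(digest in signed_digests)  # True: one claim proven, the rest stay hidden
```

This is what lets a relying party learn exactly the data it asked for, which is why SD-JWTs surfaced in the EUDI Wallet discussion alongside transparency requirements for relying parties.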

Northern Block has been actively investing in implementing OpenID Federation across our solutions, aligning with the standard’s potential for fostering trust and interoperability in digital credentialing. Yesterday, on November 6, 2024, we presented an update at a Findynet-hosted event, sharing insights on our progress. A recording of the session is available on the event meeting page for those interested in learning more.

Additionally, the sessions considered how OpenID Federation might integrate with the European Blockchain Services Infrastructure (EBSI) and other European trust establishment technologies, potentially serving as an abstraction layer to connect multiple verification methods. While OpenID Federation shows promise for trust establishment in the European context, the sessions underscored that simplifying the spec could be key to overcoming current barriers. There’s clear interest in OpenID Federation’s role in the European market, and as this work evolves, it could provide a streamlined path for cross-border compatibility and trust in digital credentials.


#2 – Adoption

IIW39 offered a strong forum to gauge the state of adoption in digital credentialing and examine what’s required to drive it forward.

 

“Has Our SSI Ecosystem Become Morally Bankrupt?”

In one of the very many thoughtful sessions at IIW39, Christopher Allen raised a challenging question: has the self-sovereign identity (SSI) ecosystem strayed from its founding principles? His blog on the topic served as inspiration for the session. Allen questioned whether current implementations are compromising core SSI values—such as existence, control, access, transparency, and protection—that were foundational to the concept of self-sovereign identity. Increasingly, we’re seeing the industry willingly delegate key functions to platform providers, often replicating centralized or federated models that limit user control and freedom.

As examples, Allen pointed to the rise of mobile driver’s licenses (mDLs) and DID implementations such as did:web. These approaches may gain traction through their ease of adoption and existing infrastructure but risk overlooking some key principles as mentioned above. This trend raises concerns about whether these solutions are being designed in a way that prioritizes control for platform providers rather than the individuals using them. Allen’s critique highlights how some modern implementations of SSI risk sacrificing these core principles for the sake of convenience or widespread adoption.

From my perspective, these principles remain the goal for myself, our company, and many collaborators in the industry. However, achieving true self-sovereignty in a scalable way involves navigating significant structural and funding challenges. 

Much like the internet was seeded by the U.S. government through projects like ARPANET, where initial government funding was critical to establishing its foundations, digital trust infrastructure requires substantial investment to reach critical mass. This foundational funding enabled others to build value on the internet through commercially driven models that continue to reshape society as a whole. Today, governments and large organizations—particularly those with a public benefit as their core mission—are often the only entities capable of making this level of investment, viewing digital trust infrastructure as a form of public infrastructure that justifies their funding.

But with funding comes influence. Governments and large entities exercise control over their constituents through rules, laws, and regulations: frameworks that don’t always align seamlessly with the digital world’s principles of openness and user autonomy. This creates a tension between the need for investment to build digital public infrastructure and the incentive models these large entities operate under, where control and oversight are often prioritized. This represents a larger struggle in balancing innovation with institutional authority, especially as digital identity and trust infrastructure continue to develop.

In my view, balancing SSI’s principles with these real-world constraints isn’t an all-or-nothing endeavor. Each implementation should strive to maximize user control, privacy, and transparency, even if some trade-offs are necessary. The investments we’re seeing are undeniably driving amazing advancements, and it’s a matter of taking the best parts and continuously improving upon them. This isn’t a zero-to-one leap but rather a journey of chipping away at constraints, making incremental progress toward a digital world that aligns more closely with self-sovereign ideals.

This session was an important reminder for me—and for all of us in this space—not to lose sight of the vision and principles that brought us here. Even as we navigate complex environments, we must stay grounded in the values that underpin SSI, ensuring they remain central as we move forward, one step at a time.

 

Public Sector Momentum and Cross-Ecosystem Acceptance

There continues to be significant momentum in the public sector around digital credentialing, with the U.S., Canada, Europe, and other regions like Bhutan each advancing in their own unique ways. In the U.S., states are increasingly adopting mobile driver’s licenses (mDLs), with many offering digital driver’s licenses through platforms like Apple and Google Wallets, while others provide their own state-specific wallets. Similarly, Canadian provinces are moving forward with their own digital wallets, and the European Union is working toward nation-state-approved wallets as part of a cohesive digital identity strategy. Each region’s approach reflects key differences and nuances in the technical stacks and governance models across these public sector ecosystems. Bhutan’s launch of its National Digital Identity (NDI) project exemplifies how even smaller nations are adopting digital credentials, contributing to a global trend in verifiable credentials across public sector initiatives.

While the public sector is a key driver, there are notable differences in approaches across these regions. Organizations like the Global Acceptance Network (GAN) are essential in bridging these varied approaches, fostering cross-ecosystem compatibility through multiple sessions and discussions around trust establishment at IIW39. For readers interested in how GAN supports the adoption of verifiable credentials across sectors and regions, we recommend our recent podcast episode on GAN’s ecosystem, which delves into its development and vision.

For anyone seeking a lay of the land in public sector credentialing, Northern Block has a strong perspective from our work in both North America and Europe. Feel free to reach out to us for further insights into how digital credentialing is evolving in the public sector across these regions.


#3 – Technical Updates

With the rapid evolution of standards and interoperability frameworks, IIW39 highlighted some of the latest tech stack advancements that are shaping digital credential ecosystems.

 

Digital Credential Query Language (DCQL)

The Digital Credential Query Language (DCQL) proposes a streamlined alternative to the complexity of existing credential presentation models: a simplified, structured approach to querying credentials. Developed as part of the upcoming Implementer’s Draft for OpenID4VP, DCQL is designed as an alternative to Presentation Exchange (PE), which, though flexible, has become complex and challenging to implement. With dependencies like JSONPath, regular expressions, and extensive schema filters, PE can be cumbersome and potentially insecure, especially in browser-based environments.

DCQL aims to address these issues by introducing a more straightforward, JSON-based syntax that is largely credential format-agnostic, allowing for simpler and faster implementation. By reducing optional elements and removing complex dependencies, DCQL lowers the technical barriers for organizations adopting digital credentials, making credentialing solutions easier to implement and scale. However, as the adoption of DCQL grows, it is expected to coexist with PE, creating a phase where both standards are in use. This dual adoption could lead to interoperability challenges, as some organizations might choose to implement only one standard. DCQL’s simplified approach thus highlights the need for careful handling of interoperability across digital identity ecosystems, especially where both PE and DCQL are expected to operate.

Although initially specific to OpenID4VP, DCQL’s adaptability has the potential for broader use, supporting a more consistent and accessible querying standard as digital identity implementations grow across ecosystems.
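To give a feel for the "plain JSON, no JSONPath" approach, here is a DCQL-style query and a tiny wallet-side matcher. The query shape is loosely modeled on the OpenID4VP draft and the field names may differ from the final specification; the matcher is an illustrative sketch, not a conformant implementation.

```python
import json

# A DCQL-style query: structured JSON with explicit claim paths,
# no JSONPath, regex, or schema-filter dependencies.
query = json.loads("""
{
  "credentials": [
    {
      "id": "pid",
      "format": "dc+sd-jwt",
      "claims": [
        {"path": ["given_name"]},
        {"path": ["address", "country"]}
      ]
    }
  ]
}
""")

def lookup(claims: dict, path: list[str]):
    """Walk a claim path step by step; return None if any step is missing."""
    node = claims
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def matches(credential: dict, credential_query: dict) -> bool:
    """A credential satisfies the query if the format matches and every
    requested claim path resolves to a value."""
    if credential.get("format") != credential_query["format"]:
        return False
    return all(lookup(credential["claims"], c["path"]) is not None
               for c in credential_query["claims"])

wallet_credential = {
    "format": "dc+sd-jwt",
    "claims": {"given_name": "Alice", "address": {"country": "CA"}},
}
print(matches(wallet_credential, query["credentials"][0]))  # True
```

Compared with PE, the entire matching logic reduces to dictionary walks over declared paths, which is what makes the language easier to implement safely in constrained environments like browsers.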

 

Google’s Zero-Knowledge Proof (ZKP) for Mobile Credentials

Google introduced an advanced, high-performance ZKP for mobile environments, which represents a significant breakthrough in privacy-preserving credentials. With this implementation, users can present specific claims without revealing additional data, aligning with SSI principles. The optimization of ZKPs for sub-second performance opens new doors for real-world use cases in identity verification. As this technology becomes more accessible, it could drive widespread adoption across industries that require privacy-centric solutions for sensitive interactions.

 

Revocation and Status Mechanisms Comparison

Managing credential status and revocation is essential, particularly for high-volume and regulatory-sensitive use cases. The session on revocation mechanisms provided a detailed comparative analysis of various approaches, evaluating them on key criteria such as scalability, privacy, security, and deployment readiness. These comparisons offer digital identity architects a clearer framework for selecting revocation methods that best align with their operational needs and compliance requirements. As digital credential ecosystems grow, a flexible approach to revocation—one that adapts to different regulatory environments and use cases—will be increasingly critical. For more details, you can view the session slides here.
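One family of approaches compared in such analyses is the bitstring status list, where each credential owns one bit in a compressed list. The sketch below illustrates the idea with standard-library tools; it is a simplification (real specifications such as W3C Bitstring Status List fix the bit ordering, list metadata, and signing), so treat the encoding details as illustrative.

```python
import base64
import zlib

def build_status_list(size: int, revoked: set[int]) -> str:
    """One bit per credential; compress and base64url-encode the whole list.
    Bit ordering here is illustrative, not spec-conformant."""
    bits = bytearray(size // 8)
    for index in revoked:
        bits[index // 8] |= 1 << (index % 8)
    return base64.urlsafe_b64encode(zlib.compress(bytes(bits))).decode()

def is_revoked(encoded: str, index: int) -> bool:
    bits = zlib.decompress(base64.urlsafe_b64decode(encoded))
    return bool(bits[index // 8] & (1 << (index % 8)))

# 100k credentials, 3 revoked: the list stays tiny after compression, and the
# verifier fetches the whole list, so the issuer cannot tell which single
# entry was checked - a privacy property weighed in these comparisons.
encoded = build_status_list(100_000, revoked={17, 42_001, 99_990})
print(is_revoked(encoded, 42_001), is_revoked(encoded, 42_002))  # True False
```

Scalability (list size per credential), privacy (herd anonymity of a bulk fetch), and deployment readiness are exactly the axes on which such mechanisms get compared against alternatives like individual status endpoints or cryptographic accumulators.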

 

Conclusion

IIW39 consistently provides a lens into the current adoption cycle and maturity of digital credential and wallet ecosystems. As digital identity continues to grow, events like IIW serve as critical forums to assess the evolving landscape of digital credentials, standards, and wallet functionalities. For organizations navigating this space, these insights highlight the importance of transparent governance backing credentials and ecosystems, practical adoption strategies, and streamlined technical solutions that simplify yet secure digital interactions.

I hope this summary was useful to readers. As always, feel free to reach out to me directly at mathieu@northernblock.io or connect with me on LinkedIn if you’d like to discuss these topics further. We’ll be attending the next Internet Identity Workshop, IIW40 (IIWXL), in Spring 2025 from April 8 to April 10, and we urge anyone who finds this discussion interesting to consider joining us there.

–end–

The post A Summary of Internet Identity Workshop #39 appeared first on Northern Block | Self Sovereign Identity Solution Provider.


Dock

Reusable KYC: What it is, benefits and impact on ID companies

The current landscape of Know Your Customer (KYC) processes is marked by inefficiencies that create friction, drive up costs, and frustrate users. Customers are required to repeat KYC procedures every time they engage with a new service, even if the same KYC provider is behind the scenes. 

This leads to high drop-off rates, as customers lose patience with slow, redundant processes. 

Reusable KYC offers a transformative approach by allowing users to complete KYC once and reuse their verified identity across multiple services, significantly enhancing the user experience and operational efficiency for businesses.

In this article we’ll go through what Reusable KYC is, its benefits and how it can be enabled by centralized and decentralized technologies.

Let's dive in!

Full article: https://www.dock.io/post/reusable-kyc


Ocean Protocol

DF114 Completes and DF115 Launches

Predictoor DF114 rewards available. DF115 runs Nov 7 — Nov 14th, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 114 (DF114) has completed.

DF115 is live today, Nov 7. It concludes on November 14th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF114 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.

To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF115

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF114 Completes and DF115 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What is Biometric Authentication? Methods & Security Features

What is biometric authentication? Explore methods, effectiveness, and security features to see if it’s right for your organization.

 

As our world becomes increasingly digital, there is a growing need for more secure identity verification methods to replace the faulty password security that is still widely used. 

 

Biometric authentication has emerged as a strong method for safeguarding network and facility access, with a wide range of possible applications from healthcare to hospitality and nearly all industries in between. 

 

Offering enhanced security while providing users with a more streamlined log-in experience, it’s likely that many of us already use biometric authentication in our day-to-day lives, even if we weren’t aware of it. 

 

Throughout this article, we will explore the different biometric technology methods available, their security features, and help you decide if it’s a suitable option for your organization.

Wednesday, 06. November 2024

UbiSecure

Tips for designing your Sign-In

The post Tips for designing your Sign-In appeared first on Ubisecure Digital Identity Management.

In today’s digital landscape, offering only one sign-in method, such as a username and password, is no longer sufficient to meet the diverse needs and expectations of users. As technology evolves and global markets expand, it’s imperative for websites and apps to provide multiple, secure, and convenient login options. By doing so, businesses can enhance user experience, improve accessibility, and strengthen their competitive position. In this blog post, we’ll explore the numerous benefits of offering alternative login methods, from wider device support and increased security to enhanced user satisfaction and operational efficiency.

Providing alternative login methods has many benefits for users and online service providers:

Wider user reach
Different regions, countries and user groups all have their most preferred way to login to consumer or business services. For example, BankID in Scandinavia, Finnish Trust Network in Finland, LINE in Japan, WeChat in China etc. Not offering the common services for the region where you users are coming from can limit service adoption and sales. Access to download various smartphone authenticator applications may also be limited to certain app store regions. Protection from lost sales and lost business
The inability to log in, for whatever reason – technical or non-technical, creates user frustration, delays and often times lost business. For an end-user, it can be easier to log in to a competitor’s service than to work out why login to your service is failing. When passwords are forgotten or text messages never arrive, having alternative options on offer increases the chance of the user signing in without any further assistance required. In the same way online shopping carts are abandoned when payment services are too difficult to use, users who can’t log in never even get a shopping cart in the first place. Operating cost management
Some third-party authentication services can add significant operating costs as the number of login events increases. Costs associated with an identity provider service may be easier to negotiate if the there are alternatives already available to use. Where there are multiple authentication options, these can be presented in an order that encourages the selection of the most cost-effective option. Technical redundancy
Imagine that the authenticator app or email client that you use continually crashes for some reason due to an unexpected mobile operating system update. Unable to click on a notification or get a generated one-time password, you are locked out of your account. Sometimes login systems are down for maintenance, upgrades, network issues or because of unforeseen difficulties. In these cases, instead of contacting support, choosing the button to sign in using an alternative provider is faster and easier. This lets the user solve their own login problems without any support burden and related costs. Wider end-user device support
Providing only “Sign in with Apple” or “Sign in with Google” makes things difficult if the user ever leaves the respective Apple or Google ecosystem, even if your app or service is targeted certain platform users only. Some organizations even have policies that forbid their employees from using non-corporate login systems for business use. Users could be shut out from accessing their personal information or historical records. Supporting multiple sign in methods enables users to securely access their data if they change devices or operating systems. Dealing with life’s little surprises
Consider the situation where SMS one-time password is the only MFA option, but the SMS never arrives, due to network failure, being out of network range, having a flat battery, a broken screen, lost or misplaced phone or service subscription halted due to an unpaid phone bill. It’s nice to have another way to sign in in these cases. Improved accessibility
For users with disabilities, the ability to use the authenticator or identity provider of their own choice can allow them to access online services without assistance. Different authenticators suit different users, some don’t work at all for parts of the user community. End-user device compatibility
Access to download various smartphone authenticator applications may also be limited to certain app stores, be region locked or be incompatible with user devices in the field running older operating systems. Helping to avoid unwanted surveillance
Repeatedly logging in via the same identity provider has the potential to inadvertently allow tracking your behaviour closely. By using different providers, or choosing authentication methods that are not inherently traceable by third-parties, users are empowered to choose freely in order to protect their own privacy. Avoiding identity provider lock-in
If there is a data breach or other security event at an upstream identity provider, immediately disabling it is the fastest way to avoid collateral attacks. Disabling a provider is easy when many alternatives remain available. Service continuity readiness requires planned, ready-to-go contingencies; identity providers can also cease operating at short notice for commercial or legal reasons. Do not keep all your eggs in one basket: diversifying the range of sign-in options mitigates the risks of individual solutions.

Meeting compliance requirements
Depending on the nature and jurisdiction of the application, where sensitive, private and/or personal information is processed, compliance with relevant security, privacy and usability legislation is mandatory. Different types of transactions may require different authentication techniques mandated in legislation, and this legislation can change over time. Being able to add and change authentication methods easily makes staying compliant easier. A good example is the European Digital Identity Framework, which will see the roll-out of digital identity wallets for European citizens in the coming years. Public sector services and certain industries will be required to allow sign-in using these new wallets.

Ready for the future
Technology and legislation are changing at a rapid pace, and authentication protocols, products and techniques adapt to these changes. Being ready for new trends and changes in user expectations with regard to sign-in techniques requires that applications can easily add, remove or change the sign-in methods offered. Adding newly emerging biometric authentication, authentication methods based on quantum-resistant cryptography, or emerging AI-supported authentication tools should be a matter of reconfiguration rather than application redesign.

Designing and planning for multiple sign-in methods with best practices

Fortunately, many commercial software applications today are designed to support externalised user authentication and authorization. These applications can be configured to connect to an Identity Provider Broker, hosted either in the cloud or on-premise. This Identity Provider Broker, or IdP Broker for short, handles the secure communication with the various identity services and authentication methods. It presents the list of available login options and encapsulates all of the complex logic needed to integrate with those methods and services.

When planning the design of a new online service, the product manager, architect or product owner should insist that user authentication is performed outside of the application itself. This is sometimes called single sign-on (SSO) support, federated identity support or externalised identity, or referenced using the names of related protocols such as OAuth, OAuth2, OpenID Connect or SAML. It accelerates product development and simplifies the logic of the online service.
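To make the externalised flow concrete, here is a minimal sketch of the first step of an OpenID Connect sign-in: the application redirects the user to the provider (or IdP broker) with an authorization request. All endpoint, client and redirect values below are hypothetical placeholders, not real Ubisecure or provider endpoints.

```python
# Sketch: building an OpenID Connect authorization request URL,
# the first step of externalised sign-in (authorization code flow).
import secrets
from urllib.parse import urlencode

def build_authorization_url(authorize_endpoint: str,
                            client_id: str,
                            redirect_uri: str,
                            scopes=("openid", "profile", "email")):
    """Return (url, state). The state value must be stored in the user's
    session and compared when the provider redirects back, to prevent CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",      # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}", state

# Hypothetical values for illustration only.
url, state = build_authorization_url(
    "https://idp.example.com/authorize",
    client_id="my-app",
    redirect_uri="https://app.example.com/callback",
)
print(url)
```

Because the application only constructs this request and handles the callback, swapping providers in or out becomes a configuration change at the broker rather than an application rewrite.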

Even older, legacy applications and services can be modified to replace built-in authentication options with externalised authentication with minor application changes.
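Once authentication is externalised, a legacy application no longer checks passwords; it consumes identity claims from a signed ID token (a JWT) issued by the provider. The sketch below decodes a token's claims with the standard library only; it is illustrative, and a real deployment must verify the token's signature with a proper JWT library against the provider's published keys. The token content shown is a made-up example.

```python
# Sketch: reading claims from an ID token payload (signature NOT verified here).
import base64
import json

def read_claims(id_token: str) -> dict:
    payload_b64 = id_token.split(".")[1]          # JWT is header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical unsigned token purely for illustration.
claims = {"sub": "user-123", "email": "ada@example.com", "groups": ["finance"]}

def encode_part(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

token = f"{encode_part({'alg': 'none'})}.{encode_part(claims)}."
print(read_claims(token)["sub"])  # → user-123
```

The `groups` claim here hints at the team and organisation features discussed below: the provider can assert memberships, so the application never has to manage them itself.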

Supporting multiple sign-in methods is a first step

Once authentication has been externalised and multiple sign in methods are supported, this opens the doors to other powerful functions that can enhance user experiences:

Support for teams and groups
An external identity provider can also provide an application with information about an individual’s membership of an organisation, be it a company, team, club or family. This enables convenient sharing of information and responsibilities within an online service.

Cross-organisation collaboration and information sharing
Sharing is not limited to your own organisation – information can be gathered from or distributed to users at other organizations, such as partners, suppliers, customers and sub-contractors. An application that is integrated with an externalised identity management system can access these rich connections and permissions without building it all into its own service.

Performing tasks on behalf of someone else
Oftentimes, the person using an online service is doing something on another person’s behalf. It may be a consultant helping a client get things done, an adult doing something for their elderly parents, or a care-giver assisting a person in need. This should not be done by sharing sign-in credentials, but by authorising the other party to perform these tasks.

Performing tasks on behalf of another organisation
In business, outsourcing certain functions to another organisation is commonplace. These partners need access to the client firm’s information and tools provided by online services. This can be achieved through externalised authorization.

Do you need help adding more authentication and authorisation options to your online service?

Ubisecure offer software and services to allow your customers to sign in using the authentication method that they choose, from a range of options that match your security choices. Different ways to sign in can be added or removed as requirements and markets change. Support for teams, groups and on behalf of use cases can be added to new and existing services. Contact Ubisecure today for more information and a no-obligation demonstration.

The post Tips for designing your Sign-In appeared first on Ubisecure Digital Identity Management.


Spruce Systems

Meet the SpruceID Team: Dani Johnson

Dani, Head of Operations at SpruceID, brings extensive experience in managing a wide range of responsibilities, from finance to people operations.
Name: Dani Johnson
Team: Operations
Based in: Seattle, Washington About Dani

I’ve worked in business operations throughout my career, and SpruceID is my second software startup. I wanted to work on the most challenging and innovative technology I could find. When I found SpruceID it felt like a perfect fit: a great home for my existing skills where I could have a broad portfolio of responsibilities, as well as an exciting set of fresh challenges.

Can you tell us about your role at SpruceID?

As the Head of Operations, I manage accounting and finance, people operations, compliance, and the rhythm of business. I work with our outstanding legal and accounting teams and oversee financial audits and SOC 2 audits. I also work closely with our CEO and lead special projects of all shapes and sizes.

What do you find most rewarding about your role?

My role is always evolving to cover new ground, so I always have something new to learn. At SpruceID I have access to so many expert minds, and it is incredibly rewarding to be able to soak up new subject matter expertise on a regular basis.

What are some of the most important qualities for someone in your role to have, in your opinion?

Integrity, drive, and an intensely meticulous and organized nature. I was one of those little kids that always colored inside the lines. My plastic dinosaurs were in order on the shelf.

What are you currently learning, or what do you hope to learn?

I am currently working a lot on international initiatives, so I am learning about corporate establishment, banking, contracting, and employment in some jurisdictions outside the US. Fascinating and sprawling.

What has been the most memorable moment for you at SpruceID so far?

Some of my most treasured SpruceID memories are of experiences we’ve had as a team at our team gatherings. Scrambling to get the wifi working in our ad hoc offices in Kyoto, eating together in one of the shacks on Copacabana Beach, singing along to Irish traditional folk music in Dublin.

What is some advice that you’d give to someone in your role who is early in their career?

Be worthy of the trust your organization has in you.

How do you define success in your role, and how do you measure it?

When the organizational operations are running smoothly it frees up the rest of the team to innovate and explore, so in some ways I measure success in my role by how little everyone else needs to think about it. Like great service at a restaurant, you don’t really notice it, you just notice that you have what you need. That is my goal.

Fun Facts


What do you enjoy doing in your free time?: Traveling near and far, cooking, eating, and taking long audiobook walks.

What is your favorite coding language (and why?): Rust, of course!

If you could be any tree, what tree would you be and why?: Any kind that involves animal visitors. Pinyon pine for bear visitors, or one of those argan trees the goats climb. A big fir tree for owl and squirrel friends would be fine.

Interested in joining our team? Check out our open roles and apply online!

Apply to Join Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Trinsic Podcast: Future of ID

David Kelts - From Idemia to Decipher Identity and the Evolution of Mobile IDs


In this episode of The Future of Identity Podcast, I’m joined by David Kelts, a leader in digital identity and mobile ID initiatives, with a career that spans significant contributions across multiple companies and initiatives worldwide. David's insights shed light on the journey of mobile driver’s licenses (mDLs), the evolution of identity verification, and his current role at Decipher Identity, where he’s tackling adoption challenges and working with businesses to expand use cases for digital identity.

We explore:

- David's early work at Idemia, including pioneering efforts in connecting driver’s licenses to online identity proofing.
- The origin and adoption challenges of mobile driver’s licenses (mDLs) and why adoption has lagged behind expectations.
- Privacy concerns surrounding digital IDs and the misconception of "phone home" tracking in mobile identity, along with how privacy regulations are influencing this space.
- The role of standards organizations and government agencies, like AAMVA and TSA, in fostering privacy and security in digital credentials.
- The future vision for digital identity, including the potential for digital-native identity credentials, cross-border use cases, and the value of user choice in secure digital wallets.

David also shares stories from working directly with states like Utah and California on mDL projects and reflects on what’s needed for broader adoption. This episode is a deep dive into the evolving landscape of digital identity and is perfect for anyone interested in the future of authentication, privacy, and user-centric identity solutions.

You can learn more about Decipher Identity at decipher.id.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


IDnow

Exploring the customer onboarding differences in global gambling markets.

Are you ready to play? We explain why gambling operators should always do more than the bare minimum when onboarding players.

The gambling industry has never been more lucrative.  

Bolstered in large part by the proliferation of easily accessible online gambling platforms, the industry was worth over $85 billion last year. By 2029, that figure is expected to leap to $133 billion. There are multiple reasons for the industry’s projected rapid growth, including improved access to high-speed internet and reliable payment systems. Another is the opening of numerous currently unregulated markets around the world, especially in Latin America. 

While for the consumer, playing at online casinos, entering online lotteries or placing online sports bets may be a similar experience across countries, there are certain differences in onboarding dictated by specific regional regulatory environments. 

Of course, for the customer, this all happens in the background so is rarely a consideration. Gambling operators, however, especially those that are keen to expand into new territories should be aware of every nuance of the customer onboarding process, every regulatory environment, every document or data point required to be compliant and every product functionality that could make the customer experience more inclusive, intuitive and secure. 

In the ‘Worth the risk: Why gambling operators should always do more than the bare minimum when onboarding players’ ebook available below, we explore the different customer onboarding journeys from some of the world’s most popular gambling markets, including the United Kingdom, Brazil, Ontario and many more.

How gambling operators should really be onboarding players. Download to discover:

- The most common fraud attacks that gambling operators were subjected to in 2023.
- The size of some of the most popular global gambling markets.
- The steps required to legally onboard players in nine different countries.

Download now

While onboarding requirements differ across regions, at IDnow, we believe it is crucial to protect vulnerable individuals and verify that customers are who they claim to be, regardless of the market.

Roger Redfearn-Tyrzyk, VP of Global Gaming.
Know Your Player?

Although the gambling industry has never offered more opportunity, it has also never been under greater regulatory scrutiny, or at greater risk of fraud attacks or bonus abuse. 

Other challenges facing operators include the ever-increasing cost of player acquisition, and the need to comply with AML, data privacy and responsible gambling requirements. Not to mention that, like most industries nowadays, there is a user expectation for seamless, secure and frictionless 24/7 online experiences. The optimal time to address these challenges is at the beginning, during the identity verification and customer onboarding process.

The role of the gambling regulator.

The regulator’s role is to protect players. They do this by devising and regularly revising regulations and issuing fines for operators that do not comply.

As new jurisdictions open and regulators implement tighter control mechanisms, the number of gambling fines is only set to increase. In fact, in 2023, the global industry saw a record number of fines ($402 million), with UK operators subjected to the most fines, followed by Australia, Ontario, the Netherlands and the US. 

Creating regulations that work for the player, the operator and the national market is no easy feat. For example, if taxes are too high or player limitations too strict, then this could push players to the black market. When this happens, governments do not benefit from additional tax and players are not protected. Striking a balance is essential.

Do more than the bare minimum when onboarding players.

At IDnow, we value our trusted relationships with regulatory bodies from around the world. It is these connections and this multi-jurisdictional expertise that allow us to empower operators to confidently navigate onboarding challenges, wherever they are based. 

“To enhance security and minimize risk, we recommend going beyond the basic identity checks by integrating additional screening measures early in the customer journey. Implementing these checks early, ideally before withdrawal, provides better protection and reduces the risk of fraud, safeguarding both customers and businesses from unnecessary exposure to financial harm,” added Roger.

Our layered, holistic approach to identity verification enables operators to add additional layers of assurance by offering a flexible solution tailored to risk appetite and regulatory needs. These layers include a range of verification checks, from data checks and financial risk assessments to biometric and video verification. 

With the ability to scale up or down in line with a country’s specific regulatory needs, IDnow ensures operators maintain robust protection against fraud and other risks, while delivering a seamless and compliant customer journey.

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Caribou Digital

Meeting the target and missing the point: Putting society at the center of digital public…

Meeting the target and missing the point: Putting society at the center of digital public infrastructure

Written by Jessica Osborn — CEO, Emrys Schoemaker —Senior Director of Advisory & Policy, and Niamh Barry — Senior Director of Measurement & Impact, all at Caribou Digital.

Alongside this year’s World Bank and IMF Annual Meetings, and following the insightful Co-Develop DPI Summit in Cairo earlier in the month, Caribou Digital participated in several conversations on the social and economic impact of digital public infrastructure (DPI).

Together, these events demonstrated a welcome shift in the conversation toward the importance of putting people at the center of DPI’s design and implementation in order to increase adoption and use. Yet, this people-centric approach seems more nascent in discussions of DPI’s measurement and impact, which still often centers on institutional efficiency and access. While these are important goals (and often the initial impetus for DPI implementation), by omitting nuanced consideration of people-level impact we risk — at best — missing an opportunity for DPI to drive more meaningful development outcomes and — at worst — DPI causing people harm. Digital transformation affects the lived experiences of citizens in very real ways, and by bringing into view goals on inclusion, agency, and empowerment, we uncover a whole range of metrics that must be considered to ensure that the impact on people’s lives is positive. The need to build an efficiency-based investment case for DPI should not trump the need to build the human impact case.

DPI’s outcome problem: A “shared means to many ends”

That people are underrepresented in the conversation on DPI measurement is symptomatic of the fact that, while there is growing consensus around the “whole of society” approach to DPI implementation, this is still nascent when it comes to measuring DPI’s impact. DPI is an emergent system that is deeply interconnected, and as such it requires a systems-level theory of change and measurement approach.

The description of DPI as “a shared means to many ends” highlights the numerous possibilities of use and, therefore, the numerous potential outcomes for different actors within a given system — government, civil society, private sector, businesses, households, and individuals. These are connected actors; thus, impact and information flows are also multidirectional.

As a DPI community, we have many reasonable hypotheses (see Caribou’s illustrative examples below) but not a coherent narrative on the multitude of outcomes that DPI — in its diverse forms — could enable. A shared understanding of DPI’s potential outcomes for different system actors could unlock multi-stakeholder collaboration on the “right measures” and mitigate the risk of misalignment and diminished effectiveness. Investing time in defining outcomes is crucial, ensuring they reflect the voices and needs of all stakeholders. Only then can metrics that genuinely serve these outcomes be defined.

Caribou’s illustrative examples of DPI outcomes

Metrics align intention and value
“When a measure becomes a target, it ceases to be a good measure.”
Goodhart’s Law

Metrics are useful ways of measuring outcomes — but only when they are aligned with a broader understanding of potential impact. Fundamentally, outcomes are expressions of what is valued; they reflect intention and galvanize collective action around what gets measured. A focus on misaligned outcomes can have lasting and challenging real-world effects; the financial inclusion sector’s fixation on account access, exacerbated by global measurement tools like Findex, is a case in point. The onus is on us as a development community to ensure an inclusive and “whole of society” approach to defining and measuring the changes that can result from DPI to drive genuinely inclusive and meaningful impact.

Who gets to define these outcomes is also a critical question involving power dynamics that influence whose voices and needs are prioritized. DPI is necessarily a state-driven initiative, but it implies a rearticulation of at least a triad of relationships in the social contract: between the state and individuals, between individuals and the market, and between the state and the market. There are power dynamics and deeply held (and sometimes contested) values underpinning all three relationships, pointing to the complexity and necessity of involving all stakeholders in finding common ground and defining outcomes that matter.

A moment in time for DPI measurement

C. V. Madhukar has said that we are at a unique moment in time in digital transformation. This unique moment offers an opportunity for the development community to align key stakeholders on a common set of DPI outcomes and the right metrics to measure those outcomes. These metrics could: 1) provide a clearer understanding of the benefits DPI delivers to different groups; 2) reveal the risks of DPI, so that products and services can course-correct, and 3) enable comparisons between approaches that could help define “Good DPI”, akin to the influential efforts to mobilize consensus around “Good ID.”

This clarity could guide funding decisions and channel resources toward solutions with the greatest potential for impact. Defining such a measurement framework requires a systems-focused theory of change that incorporates individuals, businesses, civil society actors, and governments, and that is underpinned by a critical synthesis of the existing evidence (in this regard, DIAL and Co-Develop’s forthcoming DPI Evidence Compendium is an excellent first step).

Digital development measurement practices can show the way

While the multifaceted nature of DPI presents a measurement challenge, we are not starting from scratch. As a digital development community, we have learned a great deal from measuring digital initiatives, and these form a valuable knowledge base from which to start. Some key learnings:

- Prioritize outcomes over adoption metrics. Measurement systems reflect values and intentions, and we must prioritize outcomes tracking alongside — easily and digitally obtained — adoption tracking to ensure that decision-making extends beyond access and use. Funders and implementers should measure the change they want to see in order to drive inclusive impact. Based on their extensive experience supporting DPI implementation, Public Digital’s call to measure value from the perspective of service users is an important reminder to focus on outcomes. Building on this, we could also draw on Amartya Sen’s influential “human capabilities” approach, as well as C. V. Madhukar’s emphasis on societal capabilities, to consider outcomes from a multi-stakeholder perspective. To make a compelling case for DPI, it must be clear that DPI makes a real difference in the public’s lives and that there must be a swift response to any harm — something that matters to politicians, policymakers, planners, implementers, and, most importantly, people.
- Adopt a systems-focused, complexity-aware theory of change. DPI warrants a systems-led, complexity-aware theory of change and measurement framework informed through system mapping, evidence synthesis, and deep and wide stakeholder consultation. As DPI is both ever-dynamic and advancing rapidly, theories of change must also evolve continuously. This approach should consider both opportunities and risks for the various actors engaging with DPI. Without identifying all sides, we risk a one-sided view of impact, potentially overlooking significant risks to different stakeholders. Developing a nuanced and adaptive theory of change can support DPI to be responsive, equitable, and impactful for all involved.
- Embed iterative measurement within tech systems. Data on metrics can often be captured in real time using digital solutions themselves, enabling feedback loops that drive continuous improvement. Such cost-efficient embedded measurement and adaptive management approaches can ensure that DPI initiatives focus on delivering public value beyond deployment and adoption.
- Utilize a multi-method approach. Iterative measurement (above) may need to be triangulated with other instruments in order to capture all required data. Findex-type survey data may be required to gather some data points. Additionally, literature measurement can act as a “purpose navigator,” ensuring that deployments deliver tangible public benefit.

DPI impact at a societal scale requires collective action

By building consensus on the outcomes that matter and metrics that measure those outcomes — particularly as they reflect the lived experiences of those impacted — DPI can support inclusive growth, empower individuals, and deliver societal-scale transformation.

The knowledge, tools, and momentum to make a real difference exist, but impact requires collective action and a shared vision.

Please reach out to Jess (jess@cariboudigital.net), Emrys (emrys@cariboudigital.net), or Niamh (niamh@cariboudigital.net) if you would like to discuss our thinking further.

Meeting the target and missing the point: Putting society at the center of digital public… was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Introduction to the Okta Integration Network


Whether or not you use Okta’s products, you may find yourself working on software whose target audience includes Okta customers. Adding your application to the Okta Integration Network creates a smoother and less error-prone user management experience for these shared customers, and can unlock the potential of additional features as well.

For a high-level perspective on the benefits of building to the open standards supported by the OIN, which also lets you easily support any other identity provider’s integration marketplace, here’s Director of Identity Standards Aaron Parecki:

And to learn about what the integration submission process looks like on a more technical level, the OIN 101 Walkthrough can help:

Check out Okta’s SaaS Security page and integrator help hub for more resources.

Follow OktaDev on Twitter and subscribe to our YouTube channel to learn about additional integrator resources as soon as they’re available. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Tuesday, 05. November 2024

1Kosmos BlockID

Digital Identity Spotlight: Thailand


The nation of Thailand has a ready response for governments around the world seeking insights on implementing digital identity at scale: Phuket.

In recent years, the Thai island paradise of Phuket—long known for its pristine beaches, stunning waterfalls, and vibrant nightlife—has transformed itself from a resort town to a smart city. Its thriving technology sector and “smart, safe, sustainable” approach to governance have become a prime model and critical test market for the nation’s expansive Thailand 4.0 strategy. This 20-year economic development plan is designed to turn this Southeast-Asian country of more than 70 million people into a high-tech, high-income powerhouse, supported and enabled by digital identity.

To that end, Phuket has become a pilot region for Thailand’s new digital identification and verification infrastructure—and for good reason. The city’s tourism sector provides an ideal proving ground for using digital identity to verify visa applications, travel bookings, and access to local services in a seamless, all-digital manner. Since launching 16 months ago, the test has been a trial by fire. But it’s one that Phuket’s tech-savvy population is well-positioned to navigate and help refine.

In Phuket, tourists, expats, and locals use a mobile app called ThaID (as in Thai-ID) to register for banking and healthcare services. But the system also has other purposes. To crack down on counterfeit ID cards that have long plagued Phuket’s bustling nightlife venues, this facial biometrics-based mobile digital ID is now required to gain entry to the city’s clubs and bars. Yet, for all their utility, these and other early applications are just a glimpse of what digital identity has come to mean for this nation.

Phuket, Let’s Go: When Digital Identity Is More Than Just Tech

Thailand’s ambitious digital identity initiative is about more than just financial inclusion, ensuring access to services, and securing against mounting cyber threats. In recent months, it has become emblematic of a nation set on reasserting its identity as a hub of digital innovation—and reigniting an economy lagging its regional neighbors.

In recent years, Thailand’s growth has stagnated. Even as per capita income in China, Singapore, and Malaysia has soared, Thailand has struggled to escape what the World Bank’s 2024 Development Report describes as a “middle-income trap.” A vital component of this predicament is an average annual growth rate hovering around 3% for nearly 30 years, compared to China’s average of 8.86% and Singapore’s 6.18%.

Roughly 531 miles north of Phuket, Thailand’s capital city of Bangkok is crafting a far more promising narrative. Modern skyscrapers, luxury hotels, high-end shopping centers, and world-class restaurants abound. Importantly, strides made by Thailand’s robust technology sector increasingly mirror Phuket’s. Over the past year, investment in artificial intelligence, data analytics, cloud computing, and cybersecurity, for instance, has contributed to the sector’s 12.8% growth rate. In October, Bloomberg reported that Nvidia Corp. plans to invest heavily in Thailand, joining Alphabet Inc. and Microsoft in building data centers and component manufacturing plants here.

Thailand 4.0 is designed to build on previous economic development plans, which focused on agriculture (Thailand 1.0), light industry (2.0), and heavy industry (3.0). Expanding and leveraging Thailand’s thriving tech sector to help fuel growth and opportunity across the rest of the economy means digital identity isn’t just a nice-to-have—it’s an imperative.

Why Digital Transformation Requires Trusted Identity Proofing

Put simply, digital identity is the electronic representation of an individual’s credentials used for identity verification and proofing. Think of it as your passport, driver’s license, and bank card rolled into one secure, digitized framework verified by cross-referencing government-issued, physical world credentials. For individuals, using physical credentials to make purchases, manage finances, or receive entitlements in person is a relatively simple proposition. Doing the same in digital channels through authentication based on usernames and passwords is another thing entirely—one that has failed miserably.

Thanks to never-ending phishing attacks and corporate data breaches, the login credentials and personal identity files of billions of individuals worldwide have been compromised and made available to cybercriminals and threat actors on the Dark Web. In 2024 alone, nearly 3 billion people had their personal information stolen during a cyberattack targeting data broker National Public Data (NPD). This includes what some believe to be the Social Security Number for every US citizen. This past summer, a tranche of more than 10 billion login credentials was discovered in an online hacker forum.

Cyber thieves and other threat actors leverage this information to defraud individuals, businesses, and governments. They can siphon funds from bank accounts, apply for loans or credit cards, access government benefits, and more. They can also infiltrate corporate and government networks to breach data they can monetize downstream—sometimes with implications for critical infrastructure and national security. According to TransUnion, the number of successful data breaches jumped 15% last year. Worldwide, the price tag for such attacks is projected to top $9.5 trillion annually.

Unfortunately, that projection may prove naive. Today, new forms of AI increasingly enable threat actors of all stripes to enhance the effectiveness and scale of their operations. This is material in Southeast Asia, where dense populations and significant socioeconomic stratification make countries in the region prime targets for AI-enabled attacks. It also doesn’t help that Thailand has been home to what the FBI calls the world’s largest cybercrime network. But a growing number of governments here and around the world view digital identity as critical to mitigating these threats.

ThaID & Beyond: How Digital Identity Is Taking Shape in Thailand

The ability to facilitate fast, secure interactions and transactions is foundational to every digital economy, including Thailand’s. However, it requires a universally accepted form of identity proofing that protects privacy and prevents personal identity data from being stolen and exploited by others.

Compared to Belgium’s itsme, Singapore’s SingPass, or even India’s Aadhaar system, Thailand’s digital identity initiative is still in its early stages. But it’s catching up. The country’s focus on mobile-based identity verification, a key element of digital identity, is supported by its extensive 5G mobile broadband network—among the first deployed in Southeast Asia. The initiative also benefits from a tech-savvy citizenry. Fifty percent of the population is expected to have a mobile broadband subscription by 2025, while overall Internet penetration exceeds 88%.

Rather than developing a government-run digital identity system, however, Thai officials have opted to forge public-private partnerships within a digital identity ecosystem linking service and identity providers (IDPs). So far, some of the most prominent forms of digital identity include the following:

ThaID
Launched by the Department of Provincial Administration (DOPA) in 2023, the ThaID mobile app simplifies access to services requiring identity confirmation in both the public and private sectors. For example, ThaID facilitates access to government services such as public health care, vehicle registration, and online tax payment without requiring additional data entry.

NDID: The National Digital Identity Platform
This blockchain-based infrastructure is designed primarily to address digital Know Your Customer (KYC) mandates within banking and financial services. It's intended to "enhance digital security to facilitate online transactions and enable wider access to banking and lending" via the user's preferred mobile banking app.

MNID: Mobile Network ID
Operated by participating telcos, the MNID system facilitates identity verification and authentication for their mobile customers.

These and other biometrics-based applications are designed to secure online transactions and prevent fraud. And they’re buoyed by regional collaborations like the ASEAN Digital Economy Framework, which seeks to standardize cross-border digital identity recognition. But there are hurdles. Unlike digital identity initiatives in Singapore and Estonia, where privacy concerns have been addressed through robust governance frameworks, Thailand’s initiative faces public trust issues and the fear of data misuse. Enhanced regulation and a surprising financial incentive may change that.

Tang Rat: Stimulus and a Step Toward Self-Sovereign Identity

One of the critical benefits of Thailand’s digital identity initiatives is convenience. Once registered, citizens don’t need to enter additional information when accessing services or manage multiple usernames and passwords—and biometric authentication adds an extra layer of security.

But a series of public sector data breaches, like the one that compromised the personal identity information of more than 55 million Thais earlier this year, threatens to erode trust in e-government initiatives like Thailand 4.0. Downloads of ThaID and a new digital wallet within a super app called Tang Rat—which require submission of sensitive personal information such as the back of the national ID card and a unique set of codes for making digital transactions—have been tepid. Only 1 in 5 Internet users in Thailand have downloaded either of these apps. There's no telling how many have uninstalled them.

Stepped-up regulatory mandates on data breaches and cross-border data sharing, and steep fines for non-compliance, are meant to stem concerns and incentivize stronger protections. Moreover, a significant benefit of digital wallets and their blockchain-based architectures is the use of globally unique identifiers that give users a cryptographically verifiable, decentralized digital identity. This approach sets the stage for self-sovereign identity (SSI), where authenticating users no longer requires personal data to be stored centrally on bank, government, or retail servers where it can be hacked. Instead, users can control what personal information they share, how it’s used, and for how long.

Then there’s that longer-term objective of Thailand 4.0. To accelerate adoption and help juice the economy, the Thai government is spending US$14 billion to preload digital wallets with US$300 in spending money for each person who downloads one.

What Should Come Next

This kind of incentive aside, I applaud Thailand's digital identity initiative and the country's embrace of digital wallets. In my view, digital identity's success is predicated on distributed technologies and the architectural advantages they offer. This is especially crucial given the country's ecosystem approach to digital identity. If deployed well, these technologies augur a day when someone applying for a car loan can choose which, if any, personal information to share, instead of opening their entire financial lives to a lender or dealer financing department.

It also means they could one day share third-party trust scores that allow them to demonstrate creditworthiness without revealing any personal information at all. Also promising: Thailand’s adoption of liveness tests during authentication of certain services.

But I do have one rather urgent piece of advice. To be most effective, the Thai government and its ecosystem partners would be wise to implement NIST-, FIDO2-, and ISO-type biometrics-based standards for the country's digital identity infrastructure and any associated liveness tests. Only then will they be able to defeat virtually any attempt at identity spoofing. And yes, if they were to seek my advice about the ideal setting for testing these technologies, my immediate response would be Phuket.

Interested in digital identity-based authentication but aren’t sure where to start? Learn more about 1Kosmos BlockID, the only NIST-, FIDO2-, and iBeta biometrics-certified digital identity platform—and schedule a free demo today.

The post Digital Identity Spotlight: Thailand appeared first on 1Kosmos.


IDnow

EUDI Wallets: Balancing privacy with usability.

Our Senior Architect, Sebastian Elfors recently participated in a panel discussion on the challenges of balancing privacy with usability when developing the EUDI Wallet. Here he shares his thoughts and concerns.

As the co-author of the ETSI TR 119 476 ‘Analysis of selective disclosure and zero-knowledge proofs applied to Electronic Attestation of Attributes,’ I was recently invited to attend the ‘How far should privacy go? Privacy versus Usability’ panel discussion during October’s EU Digital Identity Wallets Forum in Spielfeld’s Digital Hub in the heart of Berlin. 

At the panel I was joined by panelists Steffen Schwalm, Principal Consultant at MSG, Mirko Mollik, Identity Architect at SPRIN-D, and Philippe Rixhon, Chair of the Management Board at Valunode OU; the hour-long panel was moderated by Michal Tabor, partner at Obserwatorium.biz. 

Throughout the lively and robust discussion, the panel debated and exchanged opinions on various matters, but there was one topic that panelists were in complete consensus early on: that user privacy would be essential when EUDI Wallets are rolled out across Europe in the coming years.  

The panel also agreed that the eIDAS 2.0 regulation contains the relevant articles and recitals that cater for mandatory selective disclosure and unlinkability when the EUDI Wallets are used to present electronic attributes. Simply put, the concept of selective disclosure allows a user to present a minimum of personal information to a verifier. The classic example is to prove that you are of legal drinking age when entering a bar, without revealing any more personal information than just your age. The principle of verifier unlinkability means that one or more verifiers cannot collude to determine if the selectively disclosed attributes describe the same identity subject. 
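The bar example above can be made concrete with a minimal Python sketch of salted-hash selective disclosure. This is an illustration only, not the SD-JWT or mDL MSO wire format: the HMAC stands in for the issuer's real digital signature, and all names and attribute values are hypothetical.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's real signing key

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def issue(attributes: dict) -> tuple:
    """Issuer: salt and hash each attribute, then sign the digest list."""
    disclosures = {k: (secrets.token_hex(16), v) for k, v in attributes.items()}
    digests = sorted(sha256_hex(f"{salt}|{k}|{v}".encode())
                     for k, (salt, v) in disclosures.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                         hashlib.sha256).hexdigest()
    return {"digests": digests, "signature": signature}, disclosures

def present(disclosures: dict, keys: list) -> dict:
    """Holder: reveal only the chosen attributes (salt + value)."""
    return {k: disclosures[k] for k in keys}

def verify(credential: dict, revealed: dict) -> bool:
    """Verifier: check the issuer signature, then match each revealed attribute."""
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    return all(sha256_hex(f"{salt}|{k}|{v}".encode()) in credential["digests"]
               for k, (salt, v) in revealed.items())

cred, disclosures = issue({"name": "Somchai",
                           "age_over_18": "true",
                           "address": "Phuket"})
presentation = present(disclosures, ["age_over_18"])  # only the age claim
assert verify(cred, presentation)  # name and address stay hidden
```

The verifier sees only the salted hashes for the undisclosed attributes, so the name and address remain hidden even though the whole digest list is signed.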

Assessing what has come before.

Earlier this year, I was appointed to co-author the European Telecommunications Standards Institute (ETSI) report ETSI TR 119 476, which provided a comprehensive overview of existing cryptographic schemes for selective disclosure, unlinkability and zero-knowledge proofs (ZKP). It also gives recommendations of data formats and protocols that are suitable for selective disclosure with the EUDI Wallet. 

 Similarly, the Architecture Reference and Framework (ARF) specifies the ISO mobile driving license (mDL) MSO and IETF SD-JWT VC as credential formats for selective disclosure, which are the same formats as proposed in the ETSI report. The ISO mDL MSO is a selective disclosure standard based on ‘salted hashes’ of attributes, which are CBOR encoded and signed by the issuer. Likewise, the SD-JWT also contains salted hashes of attributes, which are JSON encoded and signed by the issuer. As such, I believe the ETSI report and ARF are aligned with respect to credential formats. 

As the ISO mDL MSO and SD-JWT are digitally signed with cryptographic algorithms approved by SOG-IS (Senior Officials Group Information Systems Security), they can therefore be used by the EU public sector. The drawback is that ISO mDL MSOs and SD-JWTs must be issued batchwise to the EUDI Wallets to cater for verifier unlinkability, which adds an operational cost for the Qualified Trust Service Providers (QTSPs) and the PID Providers. 

There is, however, also an eIDAS 2.0 article that allows EU Member States to implement more innovative ZKPs on a voluntary basis. By using a ZKP scheme, the user can prove that a given statement is true, while not providing any additional information apart from the fact that the statement is true. 

The more advanced ZKP schemes, such as BBS+ (named after its creators Boneh, Boyen, and Shacham) and zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge), have the advantages of providing full unlinkability and dynamic predicates, without the additional cost of issuing batches of credentials. There are academic research projects, such as the Cinderella project, which have implemented zk-SNARKs to "pick out" certain elements of a classic X.509 certificate or an ICAO eMRTD (electronic Machine Readable Travel Document according to the International Civil Aviation Organization standard, such as electronic passports), and shared those attributes with a verifier. This approach is also getting some interest from ISO/IEC, which may apply it to a standard for selective disclosure of the ISO mDL attributes. 

Certainly, these ZKP schemes need to be standardized before being considered for the EUDI Wallet. The IETF (Internet Engineering Task Force) CFRG (Crypto Forum Research Group) and ISO/IEC (PWI 24843 and CD 27565) are in the process of standardizing BBS+, which may result in BBS+ being referenced by a future version of the ARF.

The challenges of building an EUDI Wallet ecosystem. 

Privacy is clearly a complex topic when it comes to the ZKP protocols and related standards that need to be considered for the EUDI Wallet. When it comes to building a complete EUDI Wallet ecosystem, there are even further complexities: 

The eIDAS2 Relying Parties will be registered for specific use cases.
The QTSPs can issue Qualified Electronic Attestations of Attributes ((Q)EAAs) with embedded disclosure policies, which restrict how the EUDI Wallets can share the (Q)EAAs with Relying Parties.
The EUDI Wallets will implement access control rights, according to a new CEN TC224 draft standard.
Last but not least, the users must give their consent to share the (Q)EAAs or PIDs with Relying Parties.

All of this creates a significant user experience challenge for the EUDI Wallet ecosystem, which will require it to be designed and tested thoroughly. 

Of course, an important topic when it comes to the EUDI Wallet is transactions. The panelists exchanged ideas on how QTSPs will be able to invoice the Relying Parties for (Q)EAA transactions, in case the QTSP is not notified about how the EUDI Wallet is sharing the (Q)EAAs. In other words, how can a QTSP invoice the Relying Parties without knowing who they are? 

There are a few potential solutions to this problem. The first is to count and share each EUDI Wallet Provider’s aggregated and anonymized statistics with the QTSPs. A second option could be to insert payment terms in the (Q)EAAs with embedded disclosure policies, which the Relying Parties must accept before processing the (Q)EAAs. A third option could be to extend the OpenID for Verifiable Presentations (OID4VP) with parameters to check for agreements between the QTSPs and Relying Parties. The OID4VP protocol will be used by the EUDI Wallets for presenting PIDs and (Q)EAAs to the Relying Parties, so it could make sense to extend this protocol to make an a-priori “check” with the Relying Party that there is an agreement in place, prior to sharing the (Q)EAAs. 

Given the complexity of the EUDI Wallet ZKP protocols and the challenges of creating an ecosystem of QTSPs and Relying Parties that is also a viable business model, we agreed that discussions need to be ongoing. These topics should preferably be considered by the policy makers in the EU Commission DG-CNCT. The EUDI Large Scale Pilots, which are currently underway, should also be encouraged to test the complex scenarios described above. 

Considering how important the EUDI Wallet will be to identity management in Europe, it is fundamental for the entire eIDAS 2.0 community to resolve these issues prior to the EUDI Wallets being rolled out at scale in Europe in the coming years.

By

Sebastian Elfors
Senior Architect
Connect with Sebastian on LinkedIn


Indicio

Biometrics and Verifiable Credential pioneer Indicio launches “Bring Your Own Biometrics” Verifiable Credential solution to solve biometric fraud

Indicio’s market-changing solution gives people control over their biometric data, removes the need for centralized storage, and solves the challenge of generative-AI identity fraud, all while delivering the simplicity, privacy, and security that everyone needs to feel confident in biometric authentication. There's no need to abandon existing biometric systems: BYOB-VC can be added as a layer for rapid digital transformation. 

Today, Indicio announces the launch of its groundbreaking solution to the risks and challenges of biometric authentication: BYOB-VC, Bring Your Own Biometrics using Verifiable Credentials.

BYOB-VC is a simple, easy-to-implement way for enterprises or governments to authenticate portable biometric data without having to store it.

Simply give people their biometrics in a Verifiable Credential (as part of an identity assurance process) and require them to present the biometric template in the VC (held in a digital wallet on their mobile device) when they do a liveness check. Verification software compares the live biometric with the authenticated biometric in the credential.

This radically simplifies biometric authentication — and provides a simple, intuitive, and powerful way to bypass the risk of AI-generated deepfakes.

BYOB-VC was developed by Indicio for pre-authorized travel and seamless border crossing and is in use in Digital Travel Credential solutions. Now, it is available in an easy-to-implement form for any organization reliant on biometrics for authentication and access management.

Global surveys show public are alarmed over biometric security and privacy

BYOB-VC addresses deep public concerns over biometric authentication. The recent International Air Transport Association (IATA) Global Passenger Survey 2024 found that a majority of airline passengers are worried about biometric data breaches and how their biometric data is being used.

A global consumer survey by mobile payment platform Jumio found that 72 percent of respondents are concerned on a daily basis that they may lose money or sensitive data to a deepfake.

And a 2024 survey by GetApp found that only 5 percent of consumers believed that their biometric data was secure.

Giving people control of their biometric data and the ability to consent to share that data, as BYOB-VC does, is a critical step to reassuring the public and governments over the safety of biometric processes. It meets the demands of the European Union’s Data Protection Board, which stipulates that “individuals should have maximum control over their own biometric data.”

By combining a liveness check with the cryptographic, tamper-proof verifiability of Verifiable Credential technology, BYOB-VC is the most powerful multi factor authentication available for biometrics — and it can be enhanced to meet the most critical security requirements by easily combining other Verifiable Credentials — such as a government-issued ID — to the authentication process.

Benefits

Portable trust

You can prove the source of the Verifiable Credential and that the biometric data in the credential hasn’t been altered or faked. You can prove that the credential is bound to the person presenting it.

Bypasses generative AI deepfakes

Biometric authentication is a quick, two-step process: the person presenting themselves for a biometric scan also presents their authenticated biometric template in a Verifiable Credential from their digital wallet. Verification software compares the two and they have to match. Multiple layers of biometrics, cryptography, and other security bind the credential to the wallet, the wallet to the device, and the device to the person.

Faster, flexible, and simpler biometric management

No centralized biometric storage. BYOB-VC removes the complexity around biometric systems. There’s no need to worry about them going offline or protecting against data breaches — because there’s no data to access! Verification software is simple and mobile, allowing you to take advantage of portable, trustable biometric authentication.

Makes data privacy compliance much easier 

By enabling people to store their own biometric data you’ve not only solved the security risk of centralized storage, you’ve solved the compliance challenge of centralized storage and data minimization.

Addresses critical public concerns over biometric data

With generative AI being used in ever more elaborate scams, BYOB-VC provides robust reassurance, not only that people's data can't be stolen but that it can't be used in ways they aren't aware of or don't approve of. The IATA Global Passenger Survey found that 39 percent of people would reconsider using biometrics if they were reassured about their privacy.

Why the future of biometric authentication needs to be decentralized

Biometrics have emerged as a powerful, frictionless way to authenticate identity. They are better than username and password-based authentication because they can’t be forgotten, don’t need to be reset, and — in the case of an iris — are unique to an individual.

But as biometrics have proliferated as a method to access systems, the upside of their uniqueness has revealed a precipitous downside. Biometrics need to be stored in a database so that the verifying party can compare the live scan of the person presenting themselves with a stored copy of their biometrics. If they match, the person is authenticated.

This centralized storage means they are at risk of being stolen in a data breach, and when this happens, people cannot reset their fingerprints, faces, or irises.

And if this wasn’t a big enough existential problem, the rapid rise of generative AI has made it astonishingly easy to convincingly fake biometric data.

Entrust Cybersecurity reported a 3000% increase in deepfake attempts between 2022 and 2023, while Deloitte’s Center for Financial Services is predicting AI-generated “fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023, a compound annual growth rate of 32%.”

So far, typical responses range from “be more vigilant about security” to “don’t post detailed pictures of yourself online,” to “we need an AI solution to detect AI fakes.”

So simple, so fast, so cost effective

BYOB-VC is a simple way around both wishful thinking and an AI arms race, as it leverages the revolution in decentralized digital identity. Here’s how it works.

When a person has their biometric data first scanned as part of identity assurance, the data is digitally signed and issued to them in a Verifiable Credential that they hold on their mobile device.

Verifiable Credentials have three powerful features:

1. The source of the credential can be proved using cryptography.
2. If someone tries to manipulate the data in a credential, they break the credential.
3. The credential is cryptographically bound to the person and their device.

By rendering the biometric template taken during identity assurance in the form of a Verifiable Credential, any organization can authenticate it using simple verifier software. The source of the credential is authenticated, the integrity of the template data is authenticated, and finally, the template data is compared with the live biometric scan, all in one seamless process.
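As a rough illustration of that verification flow (a sketch, not Indicio's actual implementation: the byte-matching "matcher" and the HMAC "signature" are stand-ins for a real biometric matcher and a real digital signature), the two checks can be expressed in Python:

```python
import hashlib
import hmac

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's private signing key

def sign_template(template: bytes) -> dict:
    """Identity assurance: the issuer signs the biometric template into a credential."""
    signature = hmac.new(ISSUER_KEY, template, hashlib.sha256).hexdigest()
    return {"template": template, "signature": signature}

def similarity(a: bytes, b: bytes) -> float:
    """Toy matcher: fraction of matching bytes (real systems use vendor matchers)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def authenticate(credential: dict, live_scan: bytes, threshold: float = 0.9) -> bool:
    # Step 1: prove the template came from the trusted issuer, unaltered.
    expected = hmac.new(ISSUER_KEY, credential["template"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    # Step 2: compare the live scan against the authenticated template.
    return similarity(credential["template"], live_scan) >= threshold

cred = sign_template(b"\x10\x22\x35" * 16)       # template captured at enrollment
assert authenticate(cred, b"\x10\x22\x35" * 16)   # matching live scan passes
tampered = {"template": b"fake" * 12, "signature": cred["signature"]}
assert not authenticate(tampered, b"fake" * 12)   # altered template is rejected
```

Note that the verifier never needs a biometric database: the signed template travels with the holder, and only the signature check and the live comparison happen at verification time.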

BYOB-VC also bypasses the problem of deepfakes. Rather than just rely on a still or moving image, or a voice, you also ask for cryptographic proof of that same data created by a trusted issuer. And if you need further proof, ask them to add other Verifiable Credentials to their presentation, multiplying the layers of cryptographic proof and credential binding.

In use by Indicio customers and now widely available

BYOB-VC was pioneered by Indicio for use in travel, where a passport’s biometric data is compared with a liveness check and then issued as a Verifiable Credential following the International Civil Aviation Organization’s standards for Digital Travel Credentials. This enables travelers to use a Verifiable Credential for pre-authorized travel and seamless border crossing. Acuity Market Research’s The Prism Project described our biometric solution as “masterful.”

Now, Indicio’s masterful approach and technology are available to any company, organization, industry or sector that wants a simple, powerful solution to managing biometric authentication.

Learn more about Biometric Authentication through Verifiable Credentials on Indicio’s website, or if you have specific questions you can get in touch with our team of experts.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Biometrics and Verifiable Credential pioneer Indicio launches “Bring Your Own Biometrics” Verifiable Credential solution to solve biometric fraud appeared first on Indicio.


IDnow

Sealing the deal: IDnow Trust Services AB becomes Europe’s newest QTSP.

It’s official: IDnow Trust Services AB is now certified as a Qualified Trust Service Provider (QTSP) in the EU. We sat down with the Chief Executive Officer of IDnow Trust Services AB, Johannes Leser and Registration Officer of IDnow Trust Services AB, Uwe Pfizenmaier to learn more.

In early 2024, IDnow began a joint venture with system integrator and technology provider, ESYSCO to establish the newly formed QTSP, IDnow Trust Services AB. In October it was officially approved by PTS, the Swedish supervisory body, and is now listed as a QTSP on the eIDAS Dashboard by the European Commission.  

This significant milestone allows IDnow to offer a wide range of eIDAS-compliant digital signing solutions and ultimately offer trust services to our customers. For more information, check out our interview with Uwe and Johannes below. 

For those who may not be familiar with the term, what exactly is a QTSP?  

Johannes: A Qualified Trust Service Provider, or QTSP for short, is an entity that can create one or more trust services, such as electronic signatures, electronic time stamps, electronic seals or certificates in a qualified manner. What differentiates a QTSP from a Trust Service Provider is that it operates under stricter measures and requirements as defined by the Electronic Identification and Trust Services (eIDAS) regulation, is independently assessed in regular audits by a conformity assessment body (CAB) and is required to have insurance due to the reversed burden of proof in case of any disputes.  

Uwe: By using a QTSP, businesses benefit from an extra layer of security knowing that the products they choose are officially certified and audited by a higher authority. Although qualified trust services may or may not be required depending on the type of security an organization needs and the requirements of the country in which it operates, by choosing to do business with a QTSP, a higher level of confidence in security is achieved.  

What services will IDnow Trust Services AB offer? 

Johannes: As a QTSP, IDnow Trust Services AB can provide the following for now: 

Issue, validate and manage qualified electronic certificates for signatures and seals.
Deliver additional services such as qualified time stamps.
Persist identification evidence data.
Execute certificate revocation.

Why will QTSPs be so important in the future of the digital signature market? 

Uwe: 72% of organizations in Europe still use a mix of paper and electronic documents. Despite this, the trend toward a fully digital signing process is just around the corner. In fact, the European digital signature market is predicted to be seven times larger by 2030.  

As QTSPs are verified services under strict eIDAS regulations and requirements, they guarantee their customers a significant level of trust and security to adopt new solutions like digital signatures. Before becoming a QTSP, the entity is required to undergo rigorous and independent assessment as well as regular audits to ensure they remain compliant. As such, QTSPs offer greater legal certainty and higher security for electronic transactions and meet the same level of trust as paper documents. 

What did the process of becoming a QTSP entail, specifically in relation to regulatory requirements? 

Johannes: To become a QTSP, a full understanding of the eIDAS regulation is crucial. eIDAS offers a uniform framework of guidelines to allow completely digital and legally secure cross-border contracts within the EU. It also defines the process and technology behind different types of services such as signatures, seals, time stamps, etc.  

Uwe: In order to qualify as a QTSP, the entity must ensure all legal and regulatory obligations are met, such as data protection and privacy requirements. Once established, the eIDAS assessment process is initiated with a CAB and an audit is carried out. After successfully passing the audit, a QTSP application is submitted with a supervisory body. Upon acceptance, the QTSP is published on the EU Trust List. 

What sets IDnow Trust Services AB apart from other QTSPs?  

Johannes: IDnow Trust Services AB is the first QTSP to offer SMS-free signing for digital contract signing. During the average digital signing process, users receive a One-Time Password (OTP) code that must be entered to authenticate the transaction. This step usually causes friction for users and companies, with 22% of drop-offs coming from the OTP identification step.  

SMS-free signing dramatically simplifies the signing process, eliminating the heavy-friction requirement of OTP codes and driving higher conversion rates. Plus, by eliminating the SMS step, fraud and operational risk is significantly reduced. 

What advantages does the creation of IDnow Trust Services AB offer to IDnow customers?  

Uwe: The combination of IDnow’s leading identity verification expertise and IDnow Trust Services AB’s advanced trust services will deliver unmatched value and secure, yet agile, solutions, including Qualified Electronic Signatures to future-proof businesses in a rapidly changing regulatory landscape. 

Key benefits include being able to easily navigate complex regulations like AMLD 6 and eIDAS. As electronic certificates are legally binding and dispute-protected, it can also help to reduce the risks of digital transactions in the EU. Plus, due to the Europe-wide validity of trust services, customers can now leverage IDnow’s pan-European approach to provide seamless, consistent services for cross-border growth.  

Johannes: By combining identity verification services with secure trust services, IDnow not only creates optimized processes, but offers unparalleled reliability and boosts confidence and trust in every transaction. 

As customers can perform identity verification and trust services from a single, unified and simplified process, they can benefit from streamlined procurement and contractual simplicity. 

What does the future look like for IDnow Trust Services AB? 

Uwe: As an eIDAS-certified QTSP, the outlook is very bright. The sky is the limit, especially regarding innovation. In the future, we hope to expand our product offerings and features as well as certifications.  

Johannes: In 2025, our plan is to equip more products with our QTSP features and explore new business use cases. Additionally, we plan on achieving another certification based on an audit that will support the forthcoming EUDI Wallet. Lastly, we plan to offer future-proof services such as QEAA (Qualified Electronic Attestation of Attributes) and advanced preservation solutions, all without sacrificing regulatory compliance. We are looking forward to the upcoming year and the many innovations we plan for our customers! 

Learn more about our range of digital signature solutions here

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


Ockto

Risk stakeholders as partners in financial innovation

In heavily regulated markets, such as the financial sector, innovation and flexibility are essential to stay competitive. At the same time, strict regulation and high compliance requirements bring unique challenges. By working with risk stakeholders, such as Legal, Compliance, and Risk, from the outset, organizations can clear the way for faster and smoother innovation trajectories.


Innovating in a heavily regulated sector - Jordy Stoelwinder & Hidde Koning - Data Sharing Podcast

In this episode of the Data Sharing Podcast, host Hidde Koning welcomes Jordy Stoelwinder as his guest. Jordy works at Vista Hypotheken as product manager for digitalization and source data, having previously gained experience in this field at NHG and ING. Together they dig into the challenges of innovating in a strongly regulated sector such as the mortgage industry.


IDnow

IDnow Trust Services AB certified as a Qualified Trust Service Provider in the European Union


Munich, November 5, 2024 – IDnow, a leading identity verification platform provider in Europe, announces its partnership with newly founded IDnow Trust Services AB, a certified Qualified Trust Service Provider (QTSP) under EU Regulation 910/2014 (eIDAS).[1] Founded as a joint venture in Stockholm in early 2024 between IDnow and ESYSCO, a system integrator and technology provider, the company offers qualified trust services, such as electronic signatures, time stamps, and seals, that combine security, compliance, and user convenience.

Innovation and leadership in the digital signature market

As a QTSP recognized in the EU by the Swedish supervisory body Post- och telestyrelsen (PTS), IDnow Trust Services AB will issue, validate, and manage electronic certificates and time stamps; capture additional information, such as qualified time; hold identification evidence data; and perform certificate revocation, while operating as a compliant Certificate Authority (CA). The QTSP provides assurance of the existence of specific electronic data at a specific time, such as proof that documents have been submitted for processing.
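As a rough illustration of what such a time stamp attests, the sketch below models the idea with Python's standard library: a token binds a document's hash to a point in time, and any change to the document invalidates it. The HMAC key is only a stand-in for the authority's signing key here; real qualified time stamps follow RFC 3161 and are signed asymmetrically by an accredited TSA.

```python
# Toy model of what a qualified time stamp attests: that a document's hash
# existed at a given time. An HMAC stands in for the TSA's asymmetric
# signature so the sketch runs with the standard library alone.
import hashlib
import hmac

TSA_SECRET = b"demo-only-secret"  # stand-in for the TSA's signing key

def issue_timestamp(document: bytes, issued_at: str) -> bytes:
    digest = hashlib.sha256(document).hexdigest()
    return hmac.new(TSA_SECRET, f"{digest}|{issued_at}".encode(), "sha256").digest()

def verify_timestamp(document: bytes, issued_at: str, token: bytes) -> bool:
    digest = hashlib.sha256(document).hexdigest()
    expected = hmac.new(TSA_SECRET, f"{digest}|{issued_at}".encode(), "sha256").digest()
    return hmac.compare_digest(expected, token)

token = issue_timestamp(b"contract v1", "2024-11-05T10:00:00Z")
assert verify_timestamp(b"contract v1", "2024-11-05T10:00:00Z", token)      # intact
assert not verify_timestamp(b"contract v2", "2024-11-05T10:00:00Z", token)  # tampered
```

The key property is the same one the QTSP provides: the verifier does not have to trust the document holder, only the issuing authority.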

One of the features that IDnow Trust Services AB will immediately enable for IDnow’s customers is SMS-free signing. This certified capability simplifies the signing process, eliminating the requirement of One-Time Password (OTP) codes and driving higher conversion rates. IDnow Trust Services AB is the first QTSP that will allow this new user authentication process, which is already acknowledged by different CEN and ETSI standards and which will revolutionize the user experience in the digital signature market.  

New joint venture secures trust and simplifies compliance

“We are incredibly pleased that our joint venture, IDnow Trust Services AB, is already bearing the fruits of our labor. At IDnow, we have long made it our mission to actively shape and lead the Know Your Customer and digital identity industry; we are now once again showing this leadership role by doubling down on trust services, as they are an essential part of the transformation of the digital identity market heralded by eIDAS 2.0”, says Andreas Bodczek, CEO of IDnow.

He continues: “In the coming years, our customers will benefit from the synergy of identity verification and qualified trust services, ensuring a compliant and efficient experience for all business-critical operations across the EU. This collaboration sets a new standard for trust and operational efficiency, positioning businesses for long-term success in the fast-evolving digital landscape”.  

Johannes Leser, CEO of IDnow Trust Services AB, adds: “Trust and liability is the backbone of all business, and it will be the driving force behind the global digital economy. IDnow Trust Services AB is committed to delivering innovative and highly dependable solutions to IDnow, its customers, and partners. With trust as our mutual foundation, we’re poised to revolutionize the European digital signature market, which is expected to be seven times larger by 2030 than it is today.”

[1] The electronic Identification, Authentication and trust Services (eIDAS) regulation defines a QTSP as a natural or a legal person who provides one or more qualified trust services.


Holochain

Mobile Holochain Applications Shipped!

Holochain in Your Hand

Volla has shipped their new Quintus smartphone with a Holochain-based app pre-installed.

I repeat, TL;DR: you can have a phone with a native Holochain app on it today.

That’s it. That’s the key takeaway of this article. Details below.

Volla Quintus

The Volla Quintus, a privacy-first smartphone, just began shipping, and customers will receive their devices in the coming days. This phone runs both custom Android and Ubuntu Touch software for a “Google-free” experience. Designed as an alternative to the surveillance-focused tech giants, Volla’s phones provide a realistic way to opt out.

The Quintus is Volla’s most recent model, but they have been producing user-centered phones since 2020. They are dedicated to a distraction-free user experience, with interface tools like their Springboard, a search-first launcher that lets you interact with your applications without the attention traps of the apps, notifications, and social feeds pushed on other platforms.

The App(s)

The Holochain-based Volla Messages is shipping with the Quintus in a beta version. While the front end of the app is relatively unremarkable (it’s a chat app where you can message your contacts one-on-one or create groups), the back end is something totally new. 

Volla Messages uses Holochain for its networking and data storage, bypassing the need for central servers. The smartphones are networked together into their own cloud, with your data encrypted and held amongst your peers. 

Volla Messages also uses Holochain’s membrane proof feature, limiting spam by requiring you to consent to join a particular chat that you’ve been added to. Web3 doesn’t need to mean a free-for-all; it can mean privacy, security, and agency. You should only be in the chats you want to be in.
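The membrane-proof idea can be sketched in general terms. The following is an illustrative Python model, not Holochain's actual HDK API: a joining peer presents a proof derived from an invitation, and any validating peer can re-check it deterministically before admitting the newcomer.

```python
# Illustrative model of a membrane proof: admission to a shared space
# requires a verifiable invitation, so spam accounts without one are
# rejected by every honest validator. (Not Holochain's real API.)
import hmac

CHAT_SECRET = b"invite-key-held-by-chat-members"  # hypothetical invite key

def make_membrane_proof(agent_id: str) -> bytes:
    # An invitation token bound to the specific joining agent.
    return hmac.new(CHAT_SECRET, agent_id.encode(), "sha256").digest()

def validate_join(agent_id: str, proof: bytes) -> bool:
    # Any peer can recompute and compare the proof before accepting
    # the newcomer into the chat's network.
    expected = hmac.new(CHAT_SECRET, agent_id.encode(), "sha256").digest()
    return hmac.compare_digest(expected, proof)

proof = make_membrane_proof("alice")
assert validate_join("alice", proof)        # invited agent is admitted
assert not validate_join("mallory", proof)  # proof is not transferable
```

The point is that validation is a pure function every peer can run, so no central server is needed to gatekeep membership.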

In the coming month, Volla will be releasing a second Holochain-based app. Volla Recovery is a personal cloud app that allows you to backup the data from your phone without relying on a company’s servers, and without subjecting you to anyone’s data policies. Instead you can encrypt and backup your data among your peers, providing a seamless user experience alongside the privacy and security that your data deserves. 

Why did Volla Choose Holochain?

Volla built on Holochain because we provide scalable cloud-style apps without central servers. They are thinking about user privacy through the complete stack, from hardware, to software, to cloud services and edge computing. Holochain is a critical piece for this.

Here is what they have to say about it:

The big picture of Volla is a secure and independent communication infrastructure. A smartphone is an elementary component. The cloud is another important element. The only way to prevent external influence is distributed, highly encrypted edge computing.

—Dr. Jörg Wurzer, founder of Volla Phone
How to Access

Volla Messages isn’t exclusive to Volla devices. While the download links aren’t live yet, any Android user will soon be able to download and use this software. After an upcoming revamp of the Volla website, you should be able to download the beta Volla Messages app for your Android device. Eventually they’ll push the stable version of the app to the major app stores so everyone can access this truly peer-to-peer tech.

Buy the Phone Now

Want a privacy-first phone? You can buy the Volla Quintus now. For those of you in the EU, use our discount code HOLOCHAIN10 on the Volla site. If you are outside the EU, then you can access a discount through this special link to their Indiegogo.

Monday, 04. November 2024

Ocean Protocol

Season 7 of the Ocean Zealy Community Campaign!

We’re happy to announce Season 7 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 ($FET) tokens will be rewarded to the Top 100 users on our leaderboard 🚀

📜Program Structure

Season 7 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰Campaign Duration: 4th of November — 29th of November 12:00 PM UTC

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 7 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

5 takeaways from the ‘Sign of the times: The digital signature revolution’ webinar.

Industry experts gathered to discuss the past, present and future of digital signatures and their ever-increasing role in financial services.  

From Sumerian tablets to digital documents, signatures have been used for thousands of years as symbols of trust, proof of identity and indicators of agreement.  

Nowadays, digital signatures do all the above and offer enhanced security and efficiency. Little wonder then that the European digital signature market is predicted to be seven times larger by 2030.

To explore how digital signature solutions are changing the way we do business, we organized the ‘Sign of the times: The digital signature revolution’ webinar.

Moderated by Ellie Burns, Head of Product & Customer Marketing at IDnow, participants included Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow, Magali Biron, VP Business Development eSign at Nitro and Julian Groetzbach, NPL & Business Development Manager at TF Bank.  

Now available on-demand, the hour-long webinar covers a variety of topics, from the legal and regulatory landscape of digital signatures to how they can be used to boost conversions. 

Missed the webinar? Here are our five key takeaways!

1. Fraud is propelling the adoption of digital signatures.  

There are three different types of digital signatures: Simple Electronic Signatures (SES), Advanced Electronic Signatures (AES) and Qualified Electronic Signatures (QES). As the least regulated, SES is mostly used for internal purposes, while AES and QES are regulated by the European-wide eIDAS standard.  

Almost three quarters of European organizations still use a mixture of paper and electronic documents, but increasing numbers of companies are making the digital switch. Up until last year, TF Bank was still using wet signatures to conclude contracts.

Recent increases in the adoption of digital signatures have been largely driven by the need to onboard more customers, increase conversions and reduce fraud. However, Uwe warned that: 

The benefits of digital signatures, including enhanced trust, speed and security, alongside cost savings and a more sustainable method of concluding contracts, are only made possible through a robust identity verification process.

Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow

Despite the obvious benefits, some customers are still hesitant to use digital signatures, which is why it’s important for businesses to reassure users that they’re safe and secure and offer improved convenience. IDnow’s InstantSign, for example, only requires users to onboard once. 

Acceptance of new products and technologies can also depend on the maturity of the market, with countries including Germany still hesitant about adopting certain technologies.

2. The role of regulation in driving adoption. 

Regulations can act as a major accelerator for digital signature adoption. Despite different national laws dictating specific requirements, which can make it difficult to scale and expand, regulations like Electronic Identification, Authentication and Trust Services (eIDAS) can serve to simplify the process.  

Indeed, although compliance requirements may vary across countries, along with accepted types of identification documents, being able to offer as consistent a user experience as possible is vital.

Contrary to popular belief, the goals of regulators, compliance bodies and businesses are more aligned than people may think. As such, it’s important to embrace solutions that address the needs and challenges of both sides. Integrating a digital signature QES solution can be a great example of this as it not only complies with regulations but also leads to better conversions.  

3. Importance of trust in the digital economy. 

In the ‘offline’ world, customs like shaking hands and looking one another in the eye go some way to establishing trust. 

In financial services, trust is essential, regardless of whether online or off. If a financial institution loses the trust of its customers, it will have a knock-on effect on its reputation and bottom line. Companies must also trust their solution providers so they, in turn, can trust their customers and vice versa. 

Trust is built by being open and offering a good user experience and accessibility, so people want to adopt the solution. That is the foundation of a successful trust process.

Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow

As digital signatures deal with the exchange of personal data between businesses and users, companies need to treat the data with utmost care, while providing customers with the support they need. 

Expert guide to digital signatures. Learn more about: the different types of digital signatures, the benefits of implementing a digital signature solution, and how IDnow can help unlock valuable business opportunities. Download now.

4. Impact of eIDAS 2.0 regulations.

eIDAS 2.0 will address weaknesses in the first iteration of eIDAS, offering better data protection and a more harmonized proof of identity. It will also likely usher in a new era of safer and more secure digital signatures. 

The more forward-thinking financial services players should see new regulations, technology (e.g. artificial intelligence) and solutions (digital wallets) as opportunities to future-proof processes and ultimately increase conversions.  

5. Future of digital signatures. 

To thrive in an ever-changing market, businesses must be quick to adapt to new developments and integrate new technologies to optimize products and processes. 

Implementing technologies like biometrics, machine learning, artificial intelligence and predictive analytics will improve trust and bolster security. 

In markets such as Germany, eID will play a major role in the future of identification and digital signing, which will assist in the advancement of digital fingerprints. A huge shift is expected for digital identity as more countries prepare for the delivery of a digital ID. It is therefore important for companies to partner with solution providers like IDnow to onboard users and facilitate the conclusion of contracts in a user-friendly way.

However, as always, as companies experiment with technology and launch new processes like biometric checks and AI, they need to be mindful of new and even more inventive fraud attacks. 

Learn more about digital identities in our blog “5 reasons why digital identities will revolutionize business in 2025 and beyond.” 

To learn more about IDnow Trust Services AB and how it will revolutionize the digital signature market by offering greater legal certainty and higher security for electronic transactions, read our interview with Chief Executive Officer of IDnow Trust Services AB, Johannes Leser and Registration Officer of IDnow Trust Services AB, Uwe Pfizenmaier.

Check out other webinar wrap-ups, like ‘5 takeaways from the “Why you’re doing remote onboarding wrong” webinar’.

Sign of the times: The digital signature revolution Watch the one-hour webinar now to learn more about the legal and regulatory landscape of digital signatures and how they can be used to boost conversions. Watch now on-demand

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn

Sunday, 03. November 2024

KuppingerCole

Surviving the Cryptocalypse: Quantum Risks and Crypto Agility

In this episode, Matthias and Alexei explore the urgent need for organizations to prepare for the coming age of quantum computing and the potential risks it poses to current cryptographic standards. As quantum technology advances, traditional encryption methods may become vulnerable, putting critical data, transactions, and security at risk.

Alexei discusses the concept of crypto agility—the ability to quickly adapt cryptographic infrastructure in response to new threats. He shares practical advice on how to assess and update legacy systems, encryption methods, and workflows, including:

- Where organizations should begin if they rely heavily on cryptography for critical data and transactions
- How to evaluate and improve cryptographic infrastructure across digital systems, cloud environments, and hardware
- The essential role of vendor collaboration and supply chain security in building quantum-safe systems
- How to prioritize threats like ransomware and crypto-related risks based on industry needs

Alexei also underscores the importance of workforce training, advising that while employees don’t need deep cryptography knowledge, they must understand secure practices and tools approved by their organization’s security policy.
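The crypto-agility concept discussed in the episode can be sketched as an abstraction layer: application code calls a generic sign/verify interface, and the concrete algorithm is looked up by name, so migrating to a new (eventually post-quantum) scheme becomes a configuration change rather than a rewrite of every call site. The sketch below is an illustrative Python example; HMAC constructions stand in for real signature schemes.

```python
# Minimal crypto-agility sketch: call sites depend on an abstract
# sign/verify interface; concrete algorithms live in a registry and
# are selected by a single configuration value.
import hashlib
import hmac
from typing import Callable

# name -> (sign, verify); HMACs stand in for real schemes in this demo
ALGORITHMS: dict[str, tuple[Callable, Callable]] = {}

def register(name: str, sign_fn: Callable, verify_fn: Callable) -> None:
    ALGORITHMS[name] = (sign_fn, verify_fn)

def sign(name: str, key: bytes, msg: bytes) -> bytes:
    return ALGORITHMS[name][0](key, msg)

def verify(name: str, key: bytes, msg: bytes, tag: bytes) -> bool:
    return ALGORITHMS[name][1](key, msg, tag)

register(
    "hmac-sha256",
    lambda k, m: hmac.new(k, m, "sha256").digest(),
    lambda k, m, t: hmac.compare_digest(hmac.new(k, m, "sha256").digest(), t),
)
register(  # a stronger scheme added later, e.g. during a migration
    "hmac-sha3-512",
    lambda k, m: hmac.new(k, m, hashlib.sha3_512).digest(),
    lambda k, m, t: hmac.compare_digest(hmac.new(k, m, hashlib.sha3_512).digest(), t),
)

CURRENT_ALG = "hmac-sha256"  # the one value to flip at migration time
tag = sign(CURRENT_ALG, b"key", b"payload")
assert verify(CURRENT_ALG, b"key", b"payload", tag)
```

In a real system the registry would wrap certified libraries and key-management services, but the design principle is the same: no algorithm names hard-coded at call sites.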



Friday, 01. November 2024

Caribou Digital

To AI or Not to AI? Insights from the Biometrics Institute Congress, 2024

Keren Weitzberg & Aaron Martin

AI is on everyone’s lips. So perhaps it’s not surprising that this year’s Biometrics Institute Congress was filled with much hand-wringing about AI. A self-described “non-profit,” the Biometrics Institute is probably best understood as part industry lobbying group, part think tank, part research institute. Each year, it holds a conference in London, which brings together policymakers, vendors, regulators, privacy rights groups, and the occasional not-so-undercover academic (like ourselves).

Here are some of our key insights from this year’s congress:

Biometric vendors are currently debating how to define themselves vis-a-vis AI and how to engage with the current AI hype cycle

“AI” is a notoriously slippery term, so much so that scholars, civil society groups, and policymakers continue to argue over its very definition. Such debates are not purely semantic; rather, they shape how an industry is regulated, how it approaches funders and clients, how it is publicly understood, and how it understands itself. In their annual ‘state of the industry’ report, the Biometrics Institute tackles this question, asking “To AI or not to AI?” (We would attempt to summarize the key points for blog readers but alas it is proprietary knowledge that is only available to paid members…) At the Congress, panelists and keynote speakers raised what seemed almost existential and ontological questions. One session, for example, was entitled “What is the relationship between AI and biometrics?”

From one perspective, the relationship between AI and biometrics may seem obvious: AI is expected to enhance the functionality of identity checks. Thanks to machine learning and artificial neural networks, biometric systems have become far more accurate and precise in recent years, improving their technical performance and increasing their spread and market share. More recently, generative AI is posing new risks and vulnerabilities for the sector, including sophisticated forms of synthetic identity fraud, such as face morphing, which has the potential to disrupt the security of the travel sector. OpenAI’s release of its real-time voice API, for example, has renewed alarms about fraudsters circumventing voice recognition software.

But AI is also part of a powerful tech and regulatory imaginary. At the Congress, AI seemed less a technology (or set of technologies) to be adopted than a loaded, polyvalent term to be contended with. Generating a kind of “hyperreality,” AI seems to be everything and nothing at once, invoking both dystopian and utopian futures, producing vociferous proponents, equally vocal detractors, and a growing (if sometimes quieter and more measured) group of skeptics.

The biometrics industry has long been anxious about its public reputation, particularly in the wake of a string of controversies over encoded racism within facial recognition algorithms and in light of mounting resistance to the use of biometric systems for policing, migration control, and state repression. A recent industry survey by the Biometrics Institute revealed concerns that public mistrust of AI would spill over into the sector, further inflaming public opinion: “A significant 80% of respondents believe public opinion on AI will directly impact their views on biometrics. This highlights the need to address public concerns about AI to build trust in biometric applications.” This is one of many reasons why vendors and clients may seek to distance themselves from the AI moniker, or at least carefully navigate how they relate to the capacious and ambiguous term.

Regulators and industry are not necessarily at odds with one another

The Biometrics Institute is a space of engagement — one where regulators and industry can speak productively and where regulators can make a case for compliance. This is in contrast to common understandings about the relationship between regulation and business, whereby the regulators are thought to be adversarial and mistrusting of industry operators.

Several representatives of governmental regulatory bodies were in attendance at this year’s Congress, including the UK’s Information Commissioner’s Office (ICO). Other key public stakeholders in attendance were the EU’s DG CNECT, which oversees the EU’s AI Act, and the Office of Privacy and Civil Liberties at the US Department of Justice. The tone struck in their presentations was one of cooperation and assistance. As John Edwards, UK Information Commissioner, told the audience, “we are on the same side.” Emphasizing that the UK is a space where biometric technologies can flourish, he argued that regulators can help industry: “We want you to be able to use biometric data that add value to society and protect people’s privacy.”

This is not necessarily a story of regulatory capture, but it does speak to the way that industry and regulatory bodies are actively shaping one another. It certainly reflects an increasingly accepted mode of regulation that emphasizes cooperation and highlights the benefits of technological innovation, potentially at the expense of fundamental rights protection.

Rather than restrict biometric use cases, AI regulations may, in the long run, facilitate (and legitimate) their spread

Rather than necessarily inhibiting the biometrics industry, AI regulation can help the sector identify and manage risk, benefiting corporate players. Take, for example, the EU AI Act, which has recently begun to come into force. Several presentations at the Congress were devoted to the new Act and its implications for industry. Irina Orssich, Head of Sector AI Policy at DG CNECT, explained that the Act takes a risk-based approach to biometrics. It divides use cases into a taxonomy of risk — from unacceptable to high-risk to limited to low/minimal risk. Compliance with the EU AI Act and adoption of this taxonomy can be seen as a form of risk mitigation — a means for companies to limit their liability and exposure in ways that will keep regulators at bay. Importantly, however, it also legitimates those applications that are deemed to be less risky according to the rules.

“Besides minimising risks,” notes legal scholar Nathalie Smuha, “regulation could facilitate AI’s uptake, boost legal certainty, and hence also contribute to advancing countries’ position in the…‘race to AI.’” The same could be said about biometrics and how emerging regulations will facilitate their further adoption and acceptance in different contexts, including consumer applications and more security-oriented spaces like borders. It is therefore incumbent upon critical voices to assess how regulators, and the rules they are mandated to enforce, further entrench biometrics in our everyday lives (whether or not they are ultimately understood to be AI) and the implications of this legitimization for our societies and polities.

To AI or Not to AI? Insights from the Biometrics Institute Congress, 2024 was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

KuppingerCole Rising Stars: Spot the Most Innovative Companies in the Market

by Martin Kuppinger

For two decades, KuppingerCole Analysts has been monitoring the IAM, Digital Identity, and Cybersecurity markets. With our Leadership Compass reports on market segments, the Buyer’s Compass providing insights and criteria for decision makers, and the Executive View reports on individual solutions, we already have a range of reports covering the markets in breadth and depth.

Now, there is another member of that family of publications: KuppingerCole Rising Stars.

This new report is devoted to innovative vendors, commonly in the startup stage, ranging from early startups with initial customers to companies that have already demonstrated their growth potential.

We talk with hundreds of startups and emerging vendors every year. Now, we will rate them based on a defined set of criteria, with innovativeness and product-market fit being most relevant, but also looking at their management, organizational structure, and other factors.

Vendors that pass our defined, ambitious thresholds will earn the “KuppingerCole Rising Star” rating for their strong market potential. 

We believe these reports will be of value for both decision makers in end-user organizations and financial investors, shining a spotlight on vendors that are not yet broadly known and visible but should be observed. For end-user organizations, they might fill gaps or even become an alternative to established vendors. For investors, they are logical targets. 

Stay tuned! 


SelfKey

SingularityDAO Approves Merger With Cogito Finance and SelfKey Following SDAO Community Vote

Gros Islet, St. Lucia, 1st November 2024 - SingularityDAO has concluded a community vote to determine its proposed merger with Cogito Finance and SelfKey. SDAO holders voted overwhelmingly in favour of the merger, enabling SingularityDAO to press ahead with plans to form Singularity Finance, an EVM Layer 2 for tokenising the AI economy.


Tokeny Solutions

Tokeny’s Talent | Jordi’s Story

The post Tokeny’s Talent | Jordi’s Story appeared first on Tokeny.
Jordi Reig is Head of Engineering at Tokeny.

Tell us about yourself!

I’m Jordi Reig and I’m living in a small town near Girona, about 100 km from Barcelona, enjoying nature, tranquility and the simple pleasures of life. I studied Computer Science at university and also hold an MBA. I’ve been working in the technology field for over 20 years, with several of those years in management roles. While I have many hobbies, I particularly enjoy playing football, mountain running and watching sci-fi movies.

What were you doing before Tokeny and what inspired you to join the team?

Well, I was doing something similar at another company: working to create the best possible conditions for the teams I managed to thrive, whether by increasing well-being, enhancing team dynamics and processes, or improving deliveries.

How would you describe working at Tokeny?

Challenging but rewarding, with open communication, freedom to express opinions and a great environment to develop your abilities.

What are you most passionate about in life?

Enjoy life as much as I can. Life is short, so make the most of it!

What is your ultimate dream?

Live more, work less 😅. And I wish I could catch a glimpse of our world a thousand years from now.

What advice would you give to future Tokeny employees?

Get ready for what’s coming in the world of tokenization because it’s going to be amazing!

What gets you excited about Tokeny’s future?

The countless possibilities our solution can bring to institutions and society, along with the growth it will drive for the company, are immense.

He prefers one of each pair: Coffee/Tea, Movie/Book, Work from the office/Work from home, Dogs/Cats, Call/Text, Burger/Salad, Mountains/Ocean, Wine/Beer, Countryside/City, Slack/Emails, Casual/Formal, Crypto/Fiat, Night/Morning.




KuppingerCole

Identity and Access Governance

by Nitish Deshpande

This report provides an overview of the Identity and Access Governance market and a compass to help you find a solution that best meets your needs. It examines solutions that provide an integrated set of access governance capabilities for on-premises and SaaS systems. The report provides an assessment of the capabilities of these solutions to meet the needs of all organizations to monitor, assess, and manage access-related risks such as over-entitlements and SoD (Segregation of Duties) conflicts.

Identity and Access Governance

by Nitish Deshpande

Explore Identity and Access Governance solutions for tackling regulatory compliance, security risks, and managing complex hybrid environments efficiently. Learn more about how to select the solution that is right for you in our buyer's guide.

Finema

This Month in Digital Identity — November Edition


Welcome to the November edition of our monthly digital identity series! This month, we’re diving into essential advancements shaping digital identity and the future of secure verification. Discover key updates on the European Digital Identity Wallet, the latest approaches to mobile driver’s license verification, and how deepfake detection is evolving to tackle growing threats. Plus, we’ll explore Jumio’s innovative biometric liveness detection and its role in combating identity fraud.

Here’s a closer look at what you’ll find in this month’s insights:

The EUDI Wallet

The European Digital Identity (EUDI) Wallet is a transformative initiative designed to provide EU citizens with a secure, self-sovereign digital identity solution. This wallet enables individuals to manage their identity information independently, ensuring privacy and security in online interactions. By facilitating access to essential services such as banking, healthcare, and governmental applications, the EUDI Wallet promises to streamline daily life and foster a more inclusive digital economy.

However, its implementation faces several significant challenges. Firstly, robust cybersecurity measures are crucial to prevent identity theft and data breaches, which could undermine user trust. Secondly, achieving regulatory harmonization across diverse EU member states is essential, as different countries have unique legal frameworks and privacy regulations. Without a unified approach, the wallet’s effectiveness could be compromised, leading to confusion among users and service providers alike.

Collaboration among various stakeholders—including government agencies, technology providers, and civil society—will be pivotal in overcoming these hurdles. By establishing clear standards for security, interoperability, and user experience, the EUDI Wallet can become a reliable tool that empowers citizens while safeguarding their data. Ultimately, its success will hinge on building public trust and ensuring that the system is user-friendly, accessible, and compliant with the highest privacy standards.

mDL Verification

The emergence of mobile driver’s licenses (mDLs) signifies a significant shift in identity verification, presenting both opportunities and challenges for users and authorities. mDLs offer a modern alternative to traditional physical licenses, allowing users to carry their identification securely on their smartphones. This technological advancement aims to streamline the verification process for a wide range of services, from travel to online transactions.

However, the implementation of mDLs is fraught with challenges. One primary concern is the need for secure and intuitive verification processes that maintain user confidence while preventing identity fraud. Additionally, the lack of uniformity in regulations across different states poses a significant barrier to widespread adoption. Each state has its own legal standards and technical requirements, complicating interoperability and making it difficult for users to rely on their mDLs outside their home jurisdictions.

Privacy is another critical issue, as users must be assured that their personal information will remain secure and confidential. Striking a balance between robust security measures and a seamless user experience is essential for gaining public acceptance.

To facilitate the successful rollout of mDLs, continuous collaboration among stakeholders — including government agencies, technology developers, and consumers — is vital. By establishing best practices and regulatory frameworks, stakeholders can ensure that mDLs become a trusted and widely accepted form of identification, paving the way for a more secure and efficient digital identity landscape.

Deepfake Detection

The rise of deepfake technology presents significant challenges to the authenticity of digital content, raising concerns about misinformation and trust in media. Deepfakes utilize advanced artificial intelligence to create highly realistic but fabricated videos and audio, making it increasingly difficult for viewers to discern truth from deception. As this technology becomes more sophisticated, the need for effective detection methods becomes paramount.

Various techniques are being developed to identify deepfakes, focusing on detecting subtle inconsistencies that can indicate manipulation. These methods include analyzing facial movements, lighting discrepancies, and unnatural expressions, which may signal that a video has been altered. As detection technologies evolve, they must continuously adapt to keep pace with advances in deepfake creation.

A multi-pronged approach is essential to mitigate the risks associated with deepfakes. This involves not only developing robust technological solutions but also enhancing public awareness about the existence and implications of deepfakes. Educating consumers on how to recognize manipulated content is critical in fostering a more discerning audience that can critically evaluate the media they consume.

Regulatory measures also play a crucial role in addressing the challenges posed by deepfakes. Policymakers must consider ethical guidelines and legal frameworks that govern the creation and dissemination of synthetic media. By promoting transparency and accountability in digital content creation, society can better safeguard against the potential harms of deepfakes while preserving the integrity of visual communication.

Jumio’s Biometric Liveness Detection

Jumio’s development of in-house biometric liveness detection technology represents a significant leap forward in identity verification. As identity fraud becomes more prevalent, this innovative solution uses sophisticated artificial intelligence to accurately differentiate between genuine biometric data—such as facial recognition—and spoofing attempts, including photographs or masks. This capability is crucial for enhancing security in online transactions and customer onboarding processes.

The liveness detection technology analyzes various data points in real time, assessing factors such as facial movements and eye interactions to determine whether the biometric input is from a live person. By integrating this technology into their identity verification offerings, Jumio aims to provide organizations with a more reliable means of preventing identity theft and fraud.

Moreover, as the digital landscape evolves, the demand for effective biometric verification solutions is increasing. Organizations must navigate the dual challenge of enhancing security while ensuring a smooth user experience. Jumio’s biometric liveness detection addresses this need, positioning the company as a leader in the identity verification market.

In an environment where digital interactions are ubiquitous, establishing trust in online transactions is paramount. By offering advanced biometric solutions, Jumio is helping to build confidence among consumers and organizations alike, making it easier to engage in secure digital commerce. As identity verification technologies continue to evolve, Jumio’s innovations will play a vital role in shaping the future of secure online interactions.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Together, we can contribute to a more secure and inclusive digital future.

This Month in Digital Identity — November Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

How to Reduce Cart Abandonment and Cultivate Customer Loyalty

Learn how identity can help retailers reduce cart abandonment, increase conversions, grow CLV, and turn casual browsers into brand advocates.

Thursday, 31. October 2024

auth0

Security Considerations in the Time of AI for Startups

Perspectives of a Builder, a Buyer, and a Founder

Indicio

Indicio co-developed Digital Farm Wallet wins Constellation Research SuperNova Award

The post Indicio co-developed Digital Farm Wallet wins Constellation Research SuperNova Award appeared first on Indicio.
Trust Alliance New Zealand’s Digital Farm Wallet, co-developed by Indicio and Anonyme Labs, wins “Digital Safety, Governance, Privacy, and Cybersecurity” award. This is the second time Indicio’s Verifiable Credential technology has won a SuperNova Award.

Trust Alliance New Zealand (TANZ), a non-profit industry consortium representing farmers and other agriculture-chain stakeholders, has won one of the most prestigious awards in technology, a Constellation SuperNova Award. The awards recognize business transformation, and TANZ won in the category of Digital Safety, Governance, Privacy, and Cybersecurity.

TANZ’s Digital Farm Wallet uses Verifiable Credentials to create a data-sharing ecosystem for producers, growers, exporters, retailers, and consumers. This allows farmers to hold data on emissions, land, and water usage, and to share it directly and securely with relying parties, removing the need for large databases or for repeatedly resending the same information.

This project and technology are transformational for anyone working in agriculture. The ability to share authenticated information directly between farmers and distributors in a secure, privacy-preserving way delivered significant cost benefits in the trial: farmers spent less time on form-filling and data management and more time farming. Looking ahead, it points to a more streamlined, transparent supply chain, one where farmers can not only promise organic or green growing practices, but where consumers can verify those claims for themselves using credentials at the time of purchase.

If you would like to learn more about the project and see the technology in action you can watch an in-depth video here.

This is the second time an Indicio customer has won a SuperNova Award. The first was in 2022 for technology that went on to support the launch of Digital Travel Credentials.

“Digital Agriculture is ripe for revolution using decentralized identity. We’ve shown that the Digital Farm Wallet delivers real, tangible economic benefits to farmers, and with our technology and the depth of its capabilities and features, we see Verifiable Credentials as being a game-changer in the global agricultural value chain,”

said Indicio CEO Heather Dahl. “Land, sea, air, public sector and private sector — there is nowhere our solutions aren’t driving digital transformation, and that’s because our technology makes sharing information smoother, faster, and more secure.”

To learn more about the Indicio Digital Farm Wallet and its features, get in touch with our team of experts and we’d be happy to discuss your specific needs.

To learn more about Indicio’s platform for decentralized identity solutions, see our page on Indicio Proven, or read the Beginner’s Guide to Decentralized Identity.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



KuppingerCole

Nov 26, 2024: 2024 PAM Market Insights & Vendor Analysis

Join us for a comprehensive webinar on the 2024 Leadership Compass for Privileged Access Management (PAM), where we’ll unpack the latest insights and vendor evaluations shaping the PAM landscape. Discover which vendors lead the market in innovation, product strength, and scalability, and explore emerging capabilities like Just-in-Time (JIT) access and Cloud Infrastructure Entitlement Management (CIEM). Gain a deeper understanding of how PAM solutions can secure critical assets across multi-cloud and on-premises environments and learn best practices for selecting a solution that aligns with your organization’s security and compliance needs.

From Stress to Security: Building a Focused, Resilient Workforce

by Jasmine Eskenzi

In today’s fast-paced digital landscape, distractions and cognitive overload are some of the primary reasons people fall victim to cyber threats like phishing and social engineering attacks. With multitasking and constant digital connectivity, employees are more susceptible to these tactics, exposing organizations to increased cybersecurity risks. Jasmine Eskenzi, Co-Founder and CEO of The Zensory, will address this critical issue at cyberevolution 2024, where she’ll discuss how mental clarity and mindfulness practices can become essential parts of a robust cybersecurity strategy.

In her session, Jasmine will present research highlighting the connection between everyday stressors and increased vulnerability to cyberattacks. She will also share how organizations can integrate mindfulness and focus-building exercises into their security programs, helping employees remain attentive and resilient against cyber threats. Attendees will learn actionable steps to foster a cybersecurity culture that empowers individuals to stay alert, mentally clear, and proactive in identifying risks, while building a supportive, people-first approach to digital security.

Watch our interview with Jasmine to get a glimpse of her unique insights on blending mindfulness with cybersecurity, and how this approach can address both human vulnerabilities and technical challenges in today’s complex threat landscape.


Ocean Protocol

DF113 Completes and DF114 Launches

Predictoor DF113 rewards available. DF114 runs Oct 31 – Nov 7, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 113 (DF113) has completed.

DF114 is live today, Oct 31. It concludes on November 7th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF113 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn, submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards, run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs. To claim ROSE rewards, see the instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF114

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF113 Completes and DF114 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 30. October 2024

Northern Block

Northern Block Pilots Trust Registry with IATA and Air Travel Partners



[30 October 2024, Bangkok]

Northern Block, in collaboration with the International Air Transport Association (IATA) and key industry partners, has demonstrated that the future of air travel is digital. In a groundbreaking proof-of-concept (PoC), two passengers completed a fully digital, round-trip journey between Hong Kong and Tokyo on October 21 and 22, using digital wallets and travel credentials to navigate airport processes seamlessly.

“We’ve achieved something unprecedented here, extending beyond technical trust to build an ecosystem that is secure, reliable, and user-friendly at every stage,” said Mathieu Glaude, CEO of Northern Block.

This PoC demonstrated an unparalleled level of interoperability, bringing together over five technology solution providers and supporting more than five distinct types of digital credentials. Two different digital wallets offered an excellent user experience, while the inclusion of a trust registry provided assurance that each credential came from an authorized issuer, reinforcing trust across the ecosystem.

Highlighting Northern Block’s Role in Credential Verification and Trust Registries

Northern Block played a pivotal role by providing Cathay Pacific with its Orbit Enterprise Credentialing API for credential verification, along with an Orbit Trust Registry instance that allowed verifiers to confirm issuer authority, ensuring credential legitimacy throughout the passenger journey. Trust registries are foundational in communicating credential issuer conformance against an ecosystem governance framework, enhancing the integrity of each interaction. By verifying that every credential originates from an authorized issuer, the trust registry establishes a layer of trust that goes beyond technical requirements, embedding governance principles at every step.

Standards-Based Interoperability: Key to a Seamless Experience

This PoC demonstrated technical standards critical to enabling credential exchange and trust registry functionality. By aligning with IATA’s Technical Interoperability Profile, Northern Block and other vendors adhered to credential exchange protocols like OpenID for Verifiable Credentials, credential formats such as SD-JWT VC, Decentralized Identifiers (DIDs) such as did:web, and more. This alignment ensured that diverse solutions could operate seamlessly in a live, international airport setting.

For the trust registry, the Trust over IP’s Trust Registry Query Protocol (TRQP) allowed verifiers to confirm issuer statuses quickly and reliably, supporting real-time decision-making and building confidence across the open ecosystem.
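To make the registry's role concrete, here is a minimal sketch of the decision a verifier makes after querying a trust registry: is this issuer currently authorized to issue this credential type? The record shape, DIDs, and credential type below are illustrative assumptions, not the actual TRQP response format or the Orbit Trust Registry API.

```typescript
// Hypothetical sketch: deciding whether a credential's issuer is authorized,
// given trust-registry records. The record shape below is an illustrative
// assumption, not the actual TRQP or Orbit API.

type RegistryEntry = {
  issuerDid: string;               // e.g. "did:web:airline.example"
  credentialTypes: string[];       // credential types this issuer may issue
  status: "current" | "revoked";   // registry status of the authorization
};

// Pure decision logic: the issuer must be listed, currently authorized,
// and permitted to issue the credential type being verified.
function isIssuerAuthorized(
  entries: RegistryEntry[],
  issuerDid: string,
  credentialType: string
): boolean {
  return entries.some(
    (e) =>
      e.issuerDid === issuerDid &&
      e.status === "current" &&
      e.credentialTypes.includes(credentialType)
  );
}

// Example registry content (made-up DIDs and types for illustration):
const registry: RegistryEntry[] = [
  {
    issuerDid: "did:web:airline.example",
    credentialTypes: ["BoardingPass"],
    status: "current",
  },
  {
    issuerDid: "did:web:oldissuer.example",
    credentialTypes: ["BoardingPass"],
    status: "revoked",
  },
];

console.log(isIssuerAuthorized(registry, "did:web:airline.example", "BoardingPass"));   // true
console.log(isIssuerAuthorized(registry, "did:web:oldissuer.example", "BoardingPass")); // false
```

In a real deployment this check would run against a live registry endpoint rather than an in-memory list, but the governance question being answered is the same.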

Highlights of the Proof-of-Concept

Digital Identity and Biometrics: A Fully Digital Travel Experience
Biometric verification paired with digital credentials allowed travelers to navigate airport processes, including check-in and boarding, without presenting physical documents.

Credential Verification and Interoperability
Northern Block’s Credential Exchange API enabled Cathay Pacific to verify credentials, and industry-standard VCs, such as boarding passes and visa credentials, were integrated seamlessly using multiple vendor solutions.

Trusted Issuer Verification with Trust Registry
The trust registry verified each credential’s issuer authority, ensuring the integrity and trustworthiness of credentials from multiple issuers. By leveraging standards, this PoC demonstrates that digital credentials can be trusted and accepted across jurisdictions.

IATA’s Open API Hub: A Gateway for Digital Travel
Now accessible in IATA’s Open API Hub, Northern Block’s Credential Exchange API offers air travel stakeholders the infrastructure to support digital credentials and enhance the travel experience.

Moving Forward

As we see more adoption of digital credentials, trust registries are becoming a foundational trust establishment infrastructure. They will help to enhance the travel journey, and enable value creation in retailing, service delivery, and across the whole partner value chain. We invite you to join us in future projects as we continue to push towards digital transformation of the airline industry together.


The post Northern Block Pilots Trust Registry with IATA and Air Travel Partners appeared first on Northern Block | Self Sovereign Identity Solution Provider.



auth0

Streamline Account Provisioning and Management with SCIM

Automate user lifecycle, enhance security and boost IT efficiency across your enterprise

TBD on Dev.to

How Decentralized Apps Can Make Everyday Tasks Easy

Whenever I explore technology that's new to me, whether it's learning how decentralized apps work or what an open source tool does, seeing the technology in action is what helps me understand whether it can impact me and my everyday life. That's why, every month at TBD, we learn about our technology in action from the innovators using TBD's technologies today.

As our open source projects continue to develop, our community members contribute to the global effort of decentralizing the web with their independent projects. Here are the latest contributions from them.

Ariton

Developed by Sondre Bjellås (@sondreb), Ariton is a Web5 community SuperApp. It acts as a decentralized platform for building and managing communities! Ariton runs on any device and lets you add any Mini Apps (or features) you want, like chat, groups, events, notes, and more. Built on free and open standards, it keeps your identity and data fully in your control. Currently in the prototype stage, you can learn more and try it out here.

Kin AI

Kin AI is a personalized Web5 AI companion that offers guidance, coaching, and emotional support! Kin helps you piece together your problems and how to solve them in a way that seamlessly fits how you want it to. All your data stays on your device, and no one can access it without your specific permission. Live in beta, you can get early access on the App or Play store.

BlockCore Wallet

Also developed by Sondre (mentioned above), BlockCore Wallet is a non-custodial Web5 wallet in your browser that supports DIDs (decentralized identifiers), tokens, cryptocurrencies, and more! You can add different accounts, send/receive payments, and even use an address book to quickly send multiple payments to one contact. You can learn more and try it out yourself in the BlockCore Wallet Guide.

Share Your Open Source Project

Amazing projects, right? They really help visualize how decentralized apps can bring ownership and value to your everyday life in ways you may not have imagined.

Have a cool open source project that incorporates TBD's decentralized technologies? We'd love to hear about it! Head over and share your work with us in Discord in our #share-what-you-do channel for a chance to have your project featured on our dev site.


Elliptic

Crypto regulatory affairs: Hong Kong plans to create panel for licensed crypto exchanges to facilitate regulatory consultation

Regulators and policymakers in Hong Kong have offered further indication of forthcoming initiatives and priorities that could help to solidify Hong Kong’s status as the leading cryptoasset hub in the Asia-Pacific region. 



KuppingerCole

Enabling Smart Business Processes: Orchestrating Digital Identities & Signing

by Martin Kuppinger

Identity orchestration enables organizations to build flexible, adaptive user journeys that can adapt to the ever-changing requirements of modern organizations. Many use cases require advanced capabilities such as integrated identity verification, document verification, or qualified electronic signing to serve the security, risk management, and regulatory requirements of organizations. This whitepaper provides insight into what organizations should look for when selecting solutions for modernizing their user journeys. It also puts a spotlight on the Xayone platform as one solution serving this market.

HP Wolf Pro Security

by John Tolbert

This KuppingerCole Executive View report looks at the field of Unified Endpoint Management, threats to endpoints, and need for PC management and security. A technical review of HP Wolf Pro Security is included.

Tuesday, 29. October 2024

TBD on Dev.to

Why Broken Links Are Costing You Brand Deals (And How to Fix It)

Have you ever watched a creator’s video and thought, "Where did she get that top?" or "I need that protein powder"? You scroll through the comments, only to see the infamous "link in my bio" comment. You rush to click the link, and you're hit with "page not found" 😒. I remember once being so desperate that I took a screenshot of the item and reverse-searched it on Google Images. I found something similar, but not what I wanted. SO frustrating. Eventually, I gave up and kept on scrolling.

Now, imagine how many potential sales that creator lost because a third-party platform’s server was down. Their metrics won't even reflect those missed opportunities, making it harder to secure brand deals. Who actually has time for that? That’s when I realized I could use Decentralized Identifiers (DIDs) to create my own decentralized link hub utilizing service endpoints. With this setup, all my links and contact info are stored in one place—owned and controlled by me. Even if a service that houses all my links goes down, my links will always be accessible because they’re not reliant on any external platforms to display them. I’m sharing this in hopes that fellow creators won’t miss out on potential brand deals, and I won't have to cry over a top I never got to buy.

Before I show you exactly how you can create your own decentralized link hub, let's answer some of the questions you're probably asking yourself.

What are Decentralized Identifiers (DIDs)?

So, what exactly is a Decentralized Identifier, or DID? Think of it as your username—the one source of truth for everything you do online—except this one is owned and controlled entirely by you. It’s a unique "address" that's verifiable and doesn’t rely on any central authority like Facebook, Google, or any other service. Instead, DIDs give you the freedom to manage your own identity online, without needing to trust a single platform to store or validate your information.
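To make the "unique address" idea concrete, here's a small sketch in plain JavaScript (no libraries, and the example URI is simply borrowed from the output later in this post): every DID URI has three colon-separated parts.

```javascript
// Every DID URI has three colon-separated parts: the fixed "did" scheme,
// a method name (which tells software how the DID is created and resolved),
// and a method-specific identifier.
function parseDid(uri) {
  const [scheme, method, ...rest] = uri.split(':');
  if (scheme !== 'did' || !method || rest.length === 0) {
    throw new Error('not a valid DID URI');
  }
  return { scheme, method, id: rest.join(':') };
}

const parts = parseDid('did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey');
// parts.method is 'dht', the DID method used later in this post
```

The method name matters because it determines how your DID is published and resolved, which we'll see when followers look up your links.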

In the context of a decentralized link hub, your DID becomes the hub for all your important links. It’s not tied to any third-party service, which means you never have to worry about followers scrolling away simply because your link page isn't working. When you update your links, you only need to do it once: because they're tied to your DID, they stay consistent across all your social platforms, giving you full control.

How are Service Endpoints going to help me?

Now, let’s cover what service endpoints are. These might sound technical, but they’re actually pretty simple—think of them like your digital address/phone book. Remember those huge yellow books you used to sit on at the hair salon? They were filled with phone numbers and addresses, making it easy to find and contact people. Well, service endpoints are kind of like that, except they’re the digital "addresses" for different parts of your online identity. These could be links to your Instagram profile, website, direct messages, or even your affiliate links.

These endpoints live in your DID document. Instead of relying on centralized services like Linktree, your DID acts as the home for all your important links. When someone resolves your DID, they can access the service endpoints that you’ve decided to share.

You can also easily update and delete these links anytime you need to, again without relying on any third-party platform to keep those connections working.
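To make that concrete, here's a minimal sketch: a hand-written DID document fragment (not real resolver output; the `did:example` id is made up, and the URLs are just the ones used later in this post) showing how a link can be looked up by the fragment after the `#` in its service id.

```javascript
// Minimal, made-up DID document fragment: just an id and a service array.
const didDocument = {
  id: 'did:example:123',
  service: [
    {
      id: 'did:example:123#LinkedIn',
      type: 'professional',
      serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
    },
    {
      id: 'did:example:123#X',
      type: 'personal',
      serviceEndpoint: 'https://x.com/EbonyJLouis'
    }
  ]
};

// Find a service endpoint by the fragment after '#' in its id.
function findEndpoint(doc, name) {
  const entry = doc.service.find(s => s.id.endsWith(`#${name}`));
  return entry ? entry.serviceEndpoint : undefined;
}

findEndpoint(didDocument, 'X'); // → 'https://x.com/EbonyJLouis'
```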

The fix: let's create a decentralized Link Hub

If you’re more of a visual learner, check out my YouTube short where I show you exactly how. For this example, we're going to create a DID with two service endpoints: one pointing to my LinkedIn profile and the other pointing to my X profile.

Step 1: Import the @web5/dids package

import {DidDht} from '@web5/dids'

Step 2: Create DID with service endpoints

const myBearerDid = await DidDht.create({
  options: {
    publish: true,
    services: [
      {
        id: 'LinkedIn',
        type: 'professional',
        serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
      },
      {
        id: 'X',
        type: 'personal',
        serviceEndpoint: 'https://x.com/EbonyJLouis'
      }
    ]
  }
});

Now we've created a DID with service endpoints pointing to my LinkedIn and X profiles.

Step 3: Let's print our entire DID, also known as a BearerDid, to see our DID document where these service endpoints can be found:

console.log(myBearerDid)

It is important to never share your full BearerDid: it contains private keys that only you should have access to. The holder of these keys can perform private-key operations, like signing data. Check out this Key Management Guide to learn how to properly manage your DID keys.

Output:

my bearerDid BearerDid {
  uri: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
  document: {
    id: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
    verificationMethod: [ [Object] ],
    authentication: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    assertionMethod: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    capabilityDelegation: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    capabilityInvocation: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    service: [ [Object], [Object] ]
  },
  metadata: { published: true, versionId: '1729705713' },
  keyManager: LocalKeyManager {
    _algorithmInstances: Map(1) { [class EdDsaAlgorithm extends CryptoAlgorithm] => EdDsaAlgorithm {} },
    _keyStore: MemoryStore { store: [Map] }
  }
}

This output contains your DID string (the uri), which is your "username", along with the service array and some authentication and verification methods. To learn more, refer to this DID Document Guide.

Step 4: Now let's look closely at just our service array:

console.log("decentralized link hub", myBearerDid.document.service || "No Services Found");

Output:

decentralized link hub [
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: 'https://x.com/EbonyJLouis'
  }
]

How do I share these links?

Now that your DID is in your bio, how do your followers access your links? It's simple: they just need to resolve your DID to see a full list of your shared links.

How you resolve a DID depends on the DID method used to create it. In this example we are using the DHT DID method:

// DID in your bio
const didDhtUri = 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y'

// resolve the DID
const resolvedDhtDid = await DidDht.resolve(didDhtUri);

// access the DID Document's service links
const dhtDidDocument = resolvedDhtDid.didDocument.service;
console.log(dhtDidDocument)

Output:

[
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: [ 'https://www.linkedin.com/in/ebonylouis' ]
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: [ 'https://x.com/EbonyJLouis' ]
  }
]
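Once the service array is resolved, a few lines of plain JavaScript are enough to turn it into display-ready links. Here's a sketch (the array is hardcoded for illustration; note that resolvers may wrap serviceEndpoint values in an array, so the helper handles both shapes):

```javascript
// Hardcoded copy of a resolved service array, for illustration only.
const services = [
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: [ 'https://www.linkedin.com/in/ebonylouis' ]
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: [ 'https://x.com/EbonyJLouis' ]
  }
];

// Turn each entry into "Label: URL", using the fragment after '#' as the label.
// serviceEndpoint may be a plain string or an array, depending on the resolver.
function toLinkList(serviceArray) {
  return serviceArray.map(s => {
    const label = s.id.split('#').pop();
    const url = Array.isArray(s.serviceEndpoint) ? s.serviceEndpoint[0] : s.serviceEndpoint;
    return `${label}: ${url}`;
  });
}

toLinkList(services);
// → [ 'LinkedIn: https://www.linkedin.com/in/ebonylouis', 'X: https://x.com/EbonyJLouis' ]
```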

As you can see, we’ve successfully set up our service endpoints to point to both my LinkedIn and X accounts. Now it’s your turn to secure the bag: create your own decentralized link hub! And if you tweet about it, don’t forget to tag me.

To learn more about Decentralized Identity check out TBD's Docs.


SelfKey

SelfKey Announces Community Vote on Proposed Merger With SingularityDAO and Cogito Finance to Form Singularity Finance

Kingstown, Saint Vincent and the Grenadines, October 29th, 2024 - Decentralised identity platform SelfKey has announced a community vote to determine its proposed merger with SingularityDAO (SDAO) and Cogito Finance. If approved, SelfKey will merge to become Singularity Finance, a Layer 2 for tokenizing AI assets.


auth0

How to Choose the Right Authorization Model for Your Multi-Tenant SaaS Application

Explore how tools like Auth0 Organizations, OPA, and Okta FGA can streamline your authorization strategy for secure, scalable SaaS development.

Indicio

A landmark in digital travel — Aruba, Indicio and SITA combine a DTC and IATA OneID for international flights

The post A landmark in digital travel — Aruba, Indicio and SITA combine a DTC and IATA OneID for international flights appeared first on Indicio.

Thirty minutes from plane to beach. That’s the stress-free vision Aruba had for its tourist-driven economy. Now, thanks to a collaboration with SITA, a global leader in air travel IT, and Indicio, a global leader in decentralized identity, that vision is a reality.

In a presentation at the forthcoming IATA World Financial Symposium (WFS) and World Passenger Symposium (WPS) in Bangkok, Michael Zureik, Senior Digital Identity Architect from SITA, will explain how travelers flying from Atlanta to Aruba used a Digital Travel Credential (DTC) developed by SITA, the government of Aruba, and Indicio, and incorporated it with the International Air Transport Association (IATA) OneID, in collaboration with Delta Air Lines.

The combination enabled travelers to the Caribbean island to get preauthorization for travel before flying (using the DTC), streamline their check-in, baggage drop, and boarding at Hartsfield-Jackson airport in Atlanta (using IATA One ID), and then cross the border in seconds upon arrival in Aruba (using the DTC). They then boarded a tour bus and were on the beach within the 30-minute time frame.

The result is transformational for international travel. It heralds the arrival of seamless digital travel, where travelers get to hold their data on their mobile devices and present it for instant cryptographic verification to prove who they are. This streamlines the journey from booking to arrival, reduces waiting times, especially at border crossings, while providing data privacy for the traveler and better security for airlines, airports, and governments.

The trial is the first of its kind to merge the two leading decentralized digital travel identities into one workflow. Because the DTC is an authenticated digital version of a verified passport and is bound to its rightful owner through liveness and biometric checks, governments can trust it for travel authorization and border crossing. While IATA’s One ID uses Verifiable Credential technology to streamline airport processes, it is not a digital representation of a passport and can’t be used to cross a border.

Indicio developed the Atlanta-Aruba PoC and the software to create, hold, and verify both the DTC and OneID Verifiable Credentials, demonstrating the company’s expertise in developing Verifiable Credential and decentralized identity technology and solutions.

“As a company driving seamless digital transformation using Verifiable Credentials, we were tremendously excited to bring SITA and IATA together in Aruba,” said Heather Dahl, CEO of Indicio. “In addition to developing and implementing the world’s first Digital Travel Credential for SITA, we had the privilege of taking IATA’s vision for One ID and making it reality. And then we combined both in Aruba’s ground-breaking travel app, AHOP.”

Jeremy Springall, SVP of Borders at SITA, added, “This collaboration represents a significant step forward in redefining travel experiences. By using the power of digital credentials, we are not only enhancing efficiency but also prioritizing traveler privacy and security.”

“This is a showcase for Indicio’s technical expertise in decentralized identity,” said Dahl. “There is nothing we can’t do when it comes to designing and optimizing seamless authentication. But this trial was also an important market signal. In a world that is rapidly embracing digital identities and credentials for all kinds of data sharing, there will be many different solutions, some with more and better features than others. You never need to be limited by these choices. Because we build on interoperable standards, we can wrap credentials together and create workflows that deliver the best possible performance and user experience, easily adopt new features, and keep driving market innovation.”

SITA’s Zureik will present on the Atlanta-Aruba trial in World Ballroom B on Thursday, October 31, 2024 at 3:15pm.

For more information about Digital Travel Credentials, combining them with One ID, or other uses for verifiable credentials, visit Indicio.tech or contact us directly.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community


Lockstep

Hello, I’m a zombie. What brings you here today?

Some people think we’re ready to get psychological counselling from AIs. But in a tragedy reported by Kevin Roose of the New York Times, a troubled teenager committed suicide after apparently falling in love with a chat bot.

I have listened to Kevin Roose’s account of the suicide case on the NYT Hard Fork podcast. Honestly, I can’t bring myself to repeat the details, even in summary.

Here, I just want to ask if AI today is fit for purpose as a counsellor.

We’re putting zombies behind the wheel

If you know anything about the Large Language Models that power AI today, you will appreciate they are zombies. They are amazingly adept at natural language, and give every impression of comprehension, nuance, maybe even some sentience. But they are utterly hollow inside.

LLMs are not designed to model minds or indeed any aspect of the real world. These so-called “language” models don’t even understand spelling sufficiently to be able to count (‘in their mind’s eye’ as humans can) the number of Rs in “strawberry”.

And yet this brand-new software is being packaged into life-like chat bots and promoted by some for psychological therapy.

The company Character.AI lets developers customise chat bots with different personas and then hosts them. One such bot is dubbed Psychologist and described literally as “someone who helps with life difficulties”; it opens each conversation with “Hello, I’m a Psychologist. What brings you here today?”.

In tiny font beneath the dialog box, the user is cautioned “Remember: Everything Characters say is made up!”. However, the banner saying this bot can help with life difficulties is displayed outside the dialog box and would therefore appear to be a claim made by the company, not the bot.

How do we think AIs think?

LLMs are research tools. They capture the statistics of text and speech through training on vast files of natural language, and they then generate sentences in a given context which replicate those stats.

It’s really just a cool side-effect that these models can calculate plausible “answers” in response to “questions” and string sentences together to form conversations or essays.

The shudder quotes are deliberate. When we humans hear a question, we can usually figure out the reasons and interests that lie behind it and use that context to inform how we engage with the other person. But a chat bot is only coming up with sequences of words that it predicts will be appropriate, based on billions of prior examples.

No chat bot cares what you’re interested in, for caring is light years away from what LLMs were designed to do.

LLMs can display distinct attitude; indeed, the models can be prompted to adopt a certain style or manner. But any personality we might be tempted to see in a chat bot — as with the content it generates — is just the result of replicating the statistical properties of a subset of the training data. No chat bot can know that, for example, it is a member of a demographic or a tribe when it’s prompted to answer in the manner of an angry teenager or a Liverpool supporter.

Artificial intelligences today do not reflect internally on the things they do.  As such, they don’t think as we think, or as we might think they think.

A thought experiment

Imagine this.

A start-up business launches a self-help program for children, where total strangers are made available to sit in rooms alone with the kids and talk with them for hours on end, with the express purpose of forming ongoing relationships.

These potential new pals will have no family experience of their own. They will not have been formally schooled but instead are entirely self-taught on text from the Internet. And they occasionally hallucinate.

The supplier of the companions is aware of the hallucinations, but no one can explain them. In fact, company officials state flatly they don’t really understand any of the more complex behaviours of the companions.

On the plus side, they’re super-intelligent; they can ace medical and law school entrance exams. Cool. And any child can have them, 24×7, for free.

But I probably lost you at “strangers”.

The post Hello, I’m a zombie. What brings you here today? appeared first on Lockstep.


PingTalk

Introducing Helix: The Intersection of AI and IAM

Helix is Ping Identity’s new strategic initiative that brings AI and IAM together to create a more secure, efficient, and dynamic digital environment.  

In the evolving digital landscape, the interplay between artificial intelligence (AI) and identity & access management (IAM) will become increasingly critical. As AI continues to shape the future of technology, its role in identity management will expand beyond enhancing security and user experience; conversely, robust identity systems will be essential to the success of AI itself. 


Generative AI brings with it amazing opportunities for innovation, automation, and improvements to operational efficiency. However, the security risks inherent in its use are significant if the AI is not appropriately authenticated, authorized, and governed. Authentication and authorization of human users will not be enough going forward – we must also ensure that AI agents are correctly authenticated and authorized. These agents will need to interact with and assist their human counterparts, all within a framework that guarantees lawful, transparent, and efficient operations. 


Ping Identity’s vision embraces this dual responsibility: not only harnessing AI to advance identity solutions but also evolving identity frameworks to securely manage the identities and operations of AI agents.


PingHelix is Ping Identity’s new strategic initiative that seeks to embed AI at the core of the Ping Identity Platform in a secure and responsible way, paving the way for a future where AI and IAM are inseparable and work together to create a more secure, efficient, and dynamic digital environment.


Aergo

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain…

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain Innovation

Aergo has successfully implemented its V4 hard fork, an exciting milestone for the blockchain world. The hard fork introduces improvements to enhance performance, scalability, and security. This upgrade results from months of dedicated effort by the Aergo team. It is designed to cater to the growing needs of businesses and developers in the rapidly evolving blockchain ecosystem.

Further details about the V4 hard fork will be shared in an upcoming interview with the development team.

Behind the Scenes: A Rigorous Testing Process

The Aergo team conducted extensive testing to ensure the upgrade would be seamless and secure. The testing process began on the testnet, where various use cases were evaluated, including app installation and initialization, wallet functionality, transaction processing, and token operations.

The testing phase was critical in identifying any potential issues before the upgrade was applied to the mainnet. This cautious and methodical approach ensured that all functions operated stably, avoiding the risk of disruption to the network’s users.

In addition, Aergo implemented post-upgrade monitoring processes to track network performance in real time. Client-specific contingencies were also built into the process to ensure potential issues could be resolved quickly and efficiently.

What This Means for the Aergo Ecosystem

The successful V4 hard fork is a significant achievement for the Aergo community. For businesses, it means increased confidence in the platform’s ability to support large-scale, mission-critical operations. For developers, it opens up new possibilities for innovation and application development.

Aergo’s continued focus on scalability and security strengthens its position as a go-to blockchain platform for enterprises looking to harness the power of decentralized technologies. Whether you’re an existing user or someone exploring the platform for the first time, Aergo V4 clarifies that the network is built for growth and innovation.

Looking Ahead: What’s Next for Aergo?

While the V4 hard fork is a significant milestone, it’s just the beginning of what’s to come for Aergo. The team is already planning additional upgrades and features to further enhance the platform’s capabilities. As the blockchain space continues to evolve, Aergo remains committed to staying at the forefront of innovation, ensuring its platform remains adaptable, scalable, and secure for the future.

Conclusion

The Aergo team acknowledges that the release of the V4 hard fork was delayed and is deeply grateful for the community's patience and understanding. The team worked tirelessly behind the scenes to ensure that this upgrade would meet the highest performance, security, and scalability standards, which required extra time for thorough testing and quality assurance. While the delay may have caused some anticipation, the Aergo team believes that the V4 hard fork’s long-term benefits far outweigh the short-term wait.

In the upcoming interview with the development team, we will explore the technical intricacies behind the V4 hard fork and discuss how these enhancements will drive the future growth of the Aergo platform.

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain… was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


An Interview with the Dev Team on Aergo’s V4 Hard Fork

1. Background and Goals of the Hard Fork

1) What is the primary goal of the V4 hard fork?

The primary goal of this V4 hard fork is to enhance the transparency of smart contracts, enabling future integration with machine learning. This will lead to clearer and more traceable data and contracts on the blockchain, benefiting both current and future applications on the Aergo platform.

2) What is the most significant technological advancement of V4 compared to the previous versions?

The most significant technological advancement in V4 is introducing a new data model that positions Aergo for applications involving Machine Learning and integration with Small and Large Language Models. This advancement prepares Aergo for future developments in AI and blockchain integration.

2. Technical Improvements and Changes

1) What is the most crucial technical upgrade in the V4 hard fork?

The most crucial technical upgrade in the V4 hard fork is the enhancement of Aergo’s Lua engine, which is used for smart contract development. These improvements to the Lua engine will boost performance, security, and flexibility for developers building on the Aergo blockchain. This upgrade includes the introduction of internal transaction logging and composable transactions, making contracts more transparent.

2) What new features or protocols are being introduced with this hard fork, and what advantages will these provide to users, developers, and enterprise clients?

The V4 hard fork introduces several new features and protocols:

- Composable transactions: Batch transactions and executions together to reduce transaction costs and time for end users. Developers can execute several contract calls within a single transaction.

- Text-based contracts/transactions: All Lua contracts on Aergo will be stored as source code instead of bytecode, making them more human-readable and deterministic.

- Internal transaction logging (upcoming): This feature will further improve transparency by providing detailed logs for complex contracts that interact with each other, helping users track activities across interconnected contracts.

These features provide advantages such as improved transparency and usability for users, developers, and enterprise clients. They also lay the groundwork for future developments like DAOs and machine learning applications on the Aergo mainnet.
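As a rough illustration of the batching idea behind composable transactions (a conceptual sketch in plain JavaScript, not Aergo's actual API; the contract names are made up), several contract calls can be bundled into a single submission:

```javascript
// Conceptual sketch only; this is NOT Aergo's API. It illustrates the idea
// behind composable transactions: several contract calls are bundled and
// submitted as one transaction instead of one submission per call.
function composeTransaction(calls) {
  return {
    kind: 'batch',
    calls,                 // executed together, in order
    count: calls.length    // one network submission instead of `count`
  };
}

// Hypothetical contract names, for illustration.
const tx = composeTransaction([
  { contract: 'token', method: 'approve',  args: ['swap', 100] },
  { contract: 'swap',  method: 'exchange', args: ['token', 100] }
]);
// tx.count is 2, but only one transaction is submitted
```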

3) Are there any specific technical requirements that Aergo, focused on the enterprise environment, should prioritize compared to other projects?

Yes, Aergo emphasizes high processing speed and flexibility. For enterprise environments, the usability of smart contracts is critical in building trust and enabling more efficient use of blockchain technology.

3. Risks and Problem-Solving

1) What was the most significant technical challenge in preparing for the V4 hard fork?

The Aergo team faced several critical challenges in preparing for the V4 hard fork, primarily focusing on ensuring compatibility and a smooth transition for existing enterprise customers. Some of the key challenges include:

- Ensuring Backward Compatibility: One of the biggest challenges was to ensure that the new features and changes introduced in V4 don’t break existing functionality or negatively impact current enterprise applications. This is particularly crucial for Aergo, given its focus on enterprise clients.

- Implementing the New Data Model: Introducing a new data model to support Machine Learning and Language Model integration is a significant technological leap. Ensuring this new model works seamlessly with existing blockchain structures without compromising performance or security was a considerable challenge.

- Upgrading the Lua Engine: Enhancing the Lua engine while maintaining its efficiency and ensuring it doesn’t introduce new vulnerabilities was a complex task requiring extensive testing and optimization.

2) Is there a backup plan or rollback process in place if the hard fork is not successfully implemented?

The Aergo team has thoroughly tested various scenarios to ensure the success of the hard fork. These tests include app installation and initialization, wallet functionality, transaction processing, token operations, and more. All testing is conducted on the testnet to ensure stable performance before rolling out changes to the mainnet. Additionally, the team has implemented post-upgrade monitoring processes and prepared client-specific contingency plans.

4. Impact on the Enterprise Environment

1) What impact is this hard fork expected to have on enterprise clients, and what key benefits can they gain from this upgrade?

Enterprise clients can benefit from the enhanced Lua engine, which makes smart contract development more efficient and secure. New features like composable transactions and text-based contracts will also provide more flexibility in building and managing blockchain applications.

2) What concerns do enterprise clients typically have about large-scale upgrades like hard forks, and how are these concerns addressed?

Enterprise clients often worry about stability and security during large-scale upgrades. To address these concerns, Aergo ensures thorough testing and optimization to guarantee the network’s stability and security during and after the upgrade.

5. Post-V4 Roadmap

After completing the V4 hard fork, what are the following major goals? Are there any additional hard forks or upgrades planned in the long term?

With the introduction of features like text-based transactions and the groundwork for integrating machine learning and language models, we at Aergo are setting the stage for more advanced applications of blockchain technology in the enterprise space.

The future development direction of enterprise blockchain technology after this V4 hard fork is focused on increased functionality, better integration with AI and machine learning technologies, and more human-readable and interpretable blockchain interactions. These advancements align with the growing demand for more accessible and powerful blockchain solutions in the enterprise sector.

An Interview with the Dev Team on Aergo’s V4 Hard Fork was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


SelfKey

Merger Proposal: SelfKey to Team up with SingularityDAO and Cogito Finance to Build the Foundation for the Tokenised AI Economy

Gros Islet, Saint Lucia, 15 October 2024 - SelfKey, SingularityDAO, and Cogito Finance have announced the proposal of a strategic token merger to launch Singularity Finance, an EVM Layer-2 for tokenising the AI economy’s Real World Assets (RWA).



TBD

How Decentralized Apps Can Make Everyday Tasks Easy

Learn how innovators are using TBD's technology in their apps and how they can impact you today!

Whenever I explore technology that's new to me, whether it be learning how decentralized apps work or what an open source tool does, it's seeing the technology in action that helps me understand whether or not it can impact me and my everyday life. That's why, every month at TBD, we showcase innovators using TBD's technologies: there's no better way to learn about our technology in action.

As our open source projects continue to develop, our community members contribute to the global effort of decentralizing the web with their independent projects. Here are the latest contributions from them.

Ariton

Developed by Sondre Bjellås (@sondreb), Ariton is a Web5 community SuperApp. It acts as a decentralized platform for building and managing communities! Ariton runs on any device with the ability to add any Mini Apps (or features) you want, like chat, groups, events, notes and more. Built on free and open standards, your identity and data are always in your full control. Currently in prototype stage, you can learn more and try it out here.

Kin AI

Kin AI is a personalized Web5 AI companion that offers guidance, coaching, and emotional support! Kin helps you piece together your problems and how to solve them in a way that seamlessly fits how you want it to. All your data stays on your device, and no one can access it without your specific permission. Live in beta, you can get early access on the App or Play store.

BlockCore Wallet

Also developed by Sondre (mentioned above), BlockCore Wallet is a non-custodial Web5 wallet in your browser that supports DIDs (decentralized identifiers), tokens, crypto currencies and more! You can add different accounts, send/receive payments, and even use an address book to quickly send multiple payments to one contact. You can learn more and try it out yourself in the BlockCore Wallet Guide.

Share Your Open Source Project

Amazing projects, right? They really help visualize how decentralized apps can bring ownership and value to your everyday life in ways you may not have imagined.

Have a cool open source project that incorporates TBD's decentralized technologies? We'd love to hear about it! Head over and share your work with us in Discord in our #share-what-you-do channel for a chance to have your project featured on our dev site.

Monday, 28. October 2024

Spruce Systems

New Baby, New Headache: How Verifiable Digital Credentials Could Simplify Insurance Enrollment

Discover how verifiable digital credentials could finally bring sanity to the chaos of adding a newborn to your health insurance—turning a paperwork nightmare into a seamless experience for new parents.

Imagine this: you’ve just experienced one of the greatest joys of the human experience (having a new baby) – followed by one of the most bizarre and stressful bits of paperwork rigamarole imaginable. 

After the joyful part of having a new baby comes the boondoggle of trying to add your newborn to your health insurance policy. What should be a routine necessity unfolds into a daunting undertaking demanding Ocean’s 11-tier planning, meticulous timing, laser-focused attention, and flawless execution to survive with coverage intact.

This same conundrum confronts millions of new parents in the U.S. each year. Conflicting timelines and requirements between hospitals, insurers, and government officials routinely leave parents panicking, adding stress and instability to what is supposed to be the happiest moment of a new parent’s life.

This trap is just one of many strange bureaucratic tangles caused by outdated, slow, paper-based processes. The good news is that it doesn’t have to be this way: verifiable digital credentials can be used to securely digitize sensitive documents, eliminate delays, and close stressful administrative gaps.

“For unlucky parents, it can be *effectively impossible* to assemble the needed paperwork in time to add your newborn to your health insurance.” 

In the U.S., having a child is a “qualifying event” that allows major changes to a health insurance policy. A child’s birth gives parents a 30-day window to add their new family member to their policy. Simple enough! Until you realize ...

To add your newborn to your insurance, you need their birth certificate and social security number. According to the Social Security Administration, the average turnaround time for a new baby’s SSN and card is about two weeks, but it can be up to six! You begin to see the issue.

Health insurers also need a birth certificate for your newborn, and the timeline to deliver that document is even more hair-raising. While a freshly delivered Los Angelino can expect a birth certificate within about ten days, the New York City records agency warns that a newborn’s birth certificate can take four weeks to generate. Furthermore, at every point in this process, there’s the uncertainty of physical mail, presenting the risk that precious documents can be delayed or go missing.

In short, for unlucky parents, it can be *effectively impossible* to assemble the needed paperwork in time to add your newborn to your health insurance. That’s not even considering the possibility of birth complications for the baby or mother, which can distract a family from this extremely dicey enrollment process … at exactly the moment it’s most crucial. Across baby and parenting forums, you can take your pick of panicked stories from worried parents.

The nominal reason for these headaches is pretty straightforward: documents like a birth certificate or a social security card are sensitive, both to verify and to create. But the adoption of verifiable digital credentials can streamline many parts of such processes.

But That’s Not All

The newborn health insurance enrollment scenario is just one example of a paperwork bottleneck that shambles on, a lingering relic of the old, paper-based world. Paper documents are simply slow to process: they take extra time to verify, create, certify, and deliver, compared to digital records. Some time-sensitive documents might even still require physical signatures, placing them at the whims of an individual officer or executive. 

One group for whom this is particularly burdensome is immigrants. Legal U.S. immigrants must wait up to 90 days to receive a permanent resident card (or “green card”) after immigrating. During that time, they’re prevented from leaving the country. Upon approval for permanent residency, the applicant’s temporary travel authorization card is invalidated; however, it can take weeks for the physical card (which is required for travel outside of the U.S.) to be mailed to the new U.S. permanent resident, leaving them in a travel-blocked limbo.

For U.S. citizens, a similar gap can open up while waiting to receive a proper driver’s license in the mail: the temporary license you’re issued in the meantime isn’t considered legal identification for many purposes.

The same kind of delays can create headaches for anyone getting married. Entry into wedded bliss opens a short qualifying window for changing things like health insurance plans, but again, the paperwork process can be slow enough to disrupt the transition. And maybe the most absurd example of this sort of trap is qualifying for COBRA benefits, or the continuation of insurance after losing a job. COBRA coverage itself is most crucial in the months immediately following a layoff, but getting actual enrollment documents can take multiple months. That can leave you paying punishing premiums, without getting actual proof of coverage in return … all while unemployed!

The Verifiable Digital Credential Fix

As the old infomercials said, there’s got to be a better way.

And there is. SpruceID is part of a rapidly growing universe of companies, governments, and tech organizations building the tools for trustworthy verifiable digital credentials. These aren’t just digital images of documents like driver’s licenses and birth certificates; those would be extremely vulnerable to fraud or theft. Instead, verifiable digital credentials use a system of cryptographic digital signatures. Digitally signed documents are stored in secure hardware, such as a dedicated chip in your cell phone. Most verifiable digital credentials can or must be backed up by paper copies, so you’d still be able to stick a paper birth certificate in your kid’s scrapbook.

The primary benefit of these cryptographically secured credentials is that they can be securely presented for authentication over the Internet. That eliminates many bureaucratic headaches: credentials can be delivered immediately rather than via snail mail. While there are still verification processes, the creation of a ‘digital birth certificate’ using these tools would be near-instantaneous. With a bit of smart planning, documents could be delivered to devices belonging to the newborn’s parents, then just as quickly provided to insurance administrators.

In fact, the entire situation could (and should!) be automated, through a mix of policy and technology. Your child should automatically be added to your health insurance policy as soon as they’re born, and with verifiable digital credentials, the necessary paperwork could be sent directly from the hospital to the insurer.

That’s all in the future, and would require a lot of coordination and agreement among players in the industry. For now, verifiable digital credentials are steadily rolling out in a few more straightforward realms, such as California’s Mobile Driver’s License, Utah’s Passes and Permits, and now U.S. Passports in Google Wallet.

Imagine a world where vital life events—like adding your newborn to your health insurance—don’t come with an overwhelming dose of paperwork stress. With SpruceID’s expertise in verifiable digital credentials, that reality is closer than you think. Ready to see how digital credentials are transforming government processes and simplifying life’s biggest moments? Visit our website to learn more about how we’re making secure, streamlined solutions possible for everyone.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Northern Block

Northern Block and Credivera Announce Strategic Partnership to Accelerate Adoption of Digital Trust Ecosystems

Northern Block partners with Credivera to accelerate the adoption of workforce credentials across various ecosystems.


FOR PUBLIC RELEASE

[Toronto, Calgary, Gatineau, October 28, 2024] – Northern Block, a leading provider of digital credentialing and trust establishment solutions, and Credivera, an established market leader in workforce credentialing, are pleased to announce a global strategic partnership and a combined solution offering for workforce digital ecosystem rollouts. This new offering combines Credivera’s credential platform, the Credivera Exchange, with Northern Block’s Orbit Trust Registry. Deployed together, these products enable open, decentralized workforce ecosystems that operate across organizational boundaries.

As workers become increasingly specialized and mobile (i.e., working at more than one location or organization), a decentralized workforce ecosystem is emerging as a strategic asset for operational efficiency. In modern economies that are updating their privacy laws and identity systems, organizations have an opportunity to boost business agility and start realizing the benefits of adopting a modern digital trust ecosystem within their operating environment.


Empowering Digital Trust

Both companies are at the forefront of digital trust solutions and understand that digital credentials and trust infrastructure are important parts of the trust equation. To achieve widespread adoption of trusted digital interactions, human trust, strong governance, and robust roots of trust must also be addressed. This complete picture forms the foundation for this strategic partnership.


Standards-Based Trust Infrastructure

Both companies emphasize the need for digital trust solutions based on widely adopted standards.

Credivera Exchange and Orbit Trust Registry support industry standards such as Decentralized Identifiers (DIDs), IETF’s High-Assurance DIDs with DNS, Verifiable Credentials (VCs), OpenID Federation, and the Trust over IP (ToIP) Trust Registry Query Protocol, facilitating interoperability, security, and compliance with the latest digital trust protocols.


Combining Strengths for Best-in-Class Solutions

This partnership brings together the best of both worlds. Credivera has built a highly successful platform for workforce credentials, while Northern Block already offers an advanced, proven trust registry solution that supports any ecosystem in delivering value to its members.

“We are witnessing rapid adoption of workplace digital credentials across many markets. At Northern Block, we’re excited to partner with Credivera, a leader in this space, to bring the power of trust registries to a broader audience,” said Mathieu Glaude, Founder & CEO at Northern Block. “This partnership will extend trust benefits to a broader range of stakeholders helping organizations digitally enable their operations with confidence.”

This collaboration promises to elevate the standard of digital trust, ensuring a brighter, more secure future for all stakeholders.

“We recognize that certain ecosystems can greatly benefit from having a trust registry in their digital trust strategy. We are pleased to partner with Northern Block to offer, alongside our platform, the most advanced trust registry service,” said Dan Giurescu, Founder and CEO at Credivera.


About Northern Block

Northern Block is a global leader in implementing digital trust technologies based on open standards, technologies, and trust frameworks. They collaborate with numerous global governments, sustainability credential ecosystems, travel ecosystems, and internet trust providers to equip them with the necessary toolkits to achieve both technical and human trust. Northern Block was founded in Toronto in 2017, with a presence in Gatineau and Amsterdam. Find out more at northernblock.io.


About Credivera

Credivera pioneered the world’s first secure, open exchange for verifiable credentials, known as the Credivera Exchange. As a leader in workforce management and digital identity, Credivera empowers employees, employers, and organizations that issue credentials by increasing productivity and control over how important credentials are stored and shared. The Credivera Exchange optimizes personal privacy and trust with up-to-date, verifiable credentials secured in a digital wallet, reducing risk for all parties involved. Credivera was founded in Calgary, Alberta, in 2018, with a presence in Toronto and Gatineau. Find out more at Credivera.ca.

————————————————

The post Northern Block and Credivera Announce Strategic Partnership to Accelerate Adoption of Digital Trust Ecosystems appeared first on Northern Block | Self Sovereign Identity Solution Provider.



PingTalk

What Is Strong Authentication? Processes & Best Practices

Understand the key components of strong authentication, and best practices to protect your organization and customer data.

A strong authentication process confirms your user's identity before allowing access to digital assets. It keeps your company's information safe, as well as data belonging to your customers or employees. While a single password used to be enough to ensure online account security, this is no longer the case as fraudster attacks become increasingly sophisticated.

 

Learn what strong authentication is and how you can incorporate best practices into your user journey.


TBD

Why Broken Links Are Costing You Brand Deals (And How to Fix It)

Decentralized Identifiers (DIDs) and service endpoints can keep your links accessible even during third-party outages. Ensuring you're in full control of your online presence.

Have you ever watched a creator’s video and thought, "Where did she get that top?" or "I need that protein powder"? You scroll through the comments, only to see the infamous "link in my bio" comment. You rush to click the link, and you're hit with "page not found" 😒. I remember once being so desperate that I took a screenshot of the item and reverse-searched it on Google Images. I found something similar, but not what I wanted. SO frustrating. Eventually, I gave up and kept on scrolling.

Now, imagine how many potential sales that creator lost because a third-party platform’s server was down. Their metrics won't even reflect those missed opportunities, making it harder to secure brand deals. Who actually has time for that? That’s when I realized I could use Decentralized Identifiers (DIDs) to create my own decentralized link hub utilizing service endpoints. With this setup, all my links and contact info are stored in one place—owned and controlled by me. Even if a service that houses all my links goes down, my links will always be accessible because they’re not reliant on any external platforms to display them. I’m sharing this in hopes that fellow creators won’t miss out on potential brand deals, and I won't have to cry over a top I never got to buy.

Before I show you exactly how you can create your own decentralized link hub, let's answer some of the questions you're probably asking yourself.

What are Decentralized Identifiers (DIDs)?

So, what exactly is a Decentralized Identifier, or DID? Think of it as your username—the one source of truth for everything you do online—except this one is owned and controlled entirely by you. It's a unique "address" that's verifiable and doesn't rely on any central authority like Facebook, Google, or any other service. Instead, DIDs give you the freedom to manage your own identity online, without needing to trust a single platform to store or validate your information.
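A DID is just a structured string (scheme, method, and method-specific identifier), so you can take one apart without any SDK. As a quick sketch, `parseDid` below is a hypothetical helper for illustration, not part of @web5/dids:

```javascript
// Hypothetical helper that splits a DID string into its parts.
// Format: did:<method>:<method-specific identifier>
function parseDid(did) {
  const [scheme, method, ...rest] = did.split(':');
  if (scheme !== 'did' || !method || rest.length === 0) {
    throw new Error(`not a valid DID: ${did}`);
  }
  return { scheme, method, id: rest.join(':') };
}

const parsed = parseDid('did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey');
console.log(parsed.method); // 'dht' — the DID method, which defines how the DID is resolved
console.log(parsed.id);     // the method-specific identifier
```

The method part matters: it tells anyone who sees your DID which resolution rules to use to find your DID document.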

In the context of a decentralized link hub, your DID becomes the hub for all your important links. It’s not tied to any third-party service, which means you never have to worry about followers scrolling away simply because your link page isn't working. When you update your links, you only need to do it once, as they're tied to your DID—so they stay consistent across all your social platforms, giving you full control.

How are Service Endpoints going to help me?

Now, let’s cover what service endpoints are. These might sound technical, but they’re actually pretty simple—think of them like your digital address/phone book. Remember those huge yellow books you used to sit on at the hair salon? They were filled with phone numbers and addresses, making it easy to find and contact people. Well, service endpoints are kind of like that, except they’re the digital "addresses" for different parts of your online identity. These could be links to your Instagram profile, website, direct messages, or even your affiliate links.

These endpoints live in your DID document. Instead of relying on centralized services like Linktree, your DID acts as the home for all your important links. When someone resolves your DID, they can access the service endpoints that you’ve decided to share.

You can also easily update and delete these links anytime you need to, again without relying on any third-party platform to keep those connections working.
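As a rough sketch of what such an update amounts to: service endpoints are plain entries in the DID document's `service` array, so "updating your links" is just editing that array and republishing the document. The document below is a simplified, hypothetical DID document, and the republish step itself depends on your DID method and SDK:

```javascript
// Simplified, hypothetical DID document with a service array.
const didDocument = {
  id: 'did:example:123',
  service: [
    { id: '#LinkedIn', type: 'professional', serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis' },
    { id: '#X', type: 'personal', serviceEndpoint: 'https://x.com/EbonyJLouis' },
  ],
};

// Update one endpoint in place (the new URL is made up for the example)...
const linkedIn = didDocument.service.find((s) => s.id === '#LinkedIn');
linkedIn.serviceEndpoint = 'https://www.linkedin.com/in/new-handle';

// ...or delete one entirely.
didDocument.service = didDocument.service.filter((s) => s.id !== '#X');

console.log(didDocument.service.length); // 1
console.log(didDocument.service[0].serviceEndpoint); // 'https://www.linkedin.com/in/new-handle'
```

After editing, you would republish the document via your DID method's SDK so resolvers pick up the change.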

The fix: let's create a decentralized Link Hub

If you’re more of a visual learner, check out my YouTube short where I show you exactly how. For this example, we're going to create a DID with two service endpoints: one pointing to my LinkedIn profile and the other to my X profile.

Step 1: Import web5/dids package

import { DidDht } from '@web5/dids';

Step 2: Create DID with service endpoints

const myBearerDid = await DidDht.create({
  options: {
    publish: true,
    services: [
      {
        id: 'LinkedIn',
        type: 'professional',
        serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
      },
      {
        id: 'X',
        type: 'personal',
        serviceEndpoint: 'https://x.com/EbonyJLouis'
      }
    ]
  }
});

Now we've created your DID with service endpoints pointing to your LinkedIn and X profiles.

Step 3: Let's print our entire DID, also known as a BearerDid, to see our DID document, where these service endpoints can be found:

console.log(myBearerDid)
warning

It is important to never share your full BearerDid: it contains private keys that only you should have access to. The holder of these keys can perform private key operations, like signing data. Check out this Key Management Guide to learn how to properly manage your DID keys.

Output:

my bearerDid BearerDid {
  uri: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
  document: {
    id: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
    verificationMethod: [ [Object] ],
    authentication: [
      'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
    ],
    assertionMethod: [
      'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
    ],
    capabilityDelegation: [
      'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
    ],
    capabilityInvocation: [
      'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
    ],
    service: [ [Object], [Object] ]
  },
  metadata: { published: true, versionId: '1729705713' },
  keyManager: LocalKeyManager {
    _algorithmInstances: Map(1) {
      [class EdDsaAlgorithm extends CryptoAlgorithm] => EdDsaAlgorithm {}
    },
    _keyStore: MemoryStore { store: [Map] }
  }
}

This output contains your DID string (uri), which is your "username", along with the services array and some authentication and verification methods. To learn more, refer to this DID Document Guide.

Step 4: Now let's look closely at just our service array:

console.log("decentralized link hub", myBearerDid.document.service || "No Services Found");

Output:

decentralized link hub [
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: 'https://x.com/EbonyJLouis'
  }
]
How do I share these links?

Now that your DID is in your bio, how do your followers access your links? It's simple: they just need to resolve your DID to see a full list of your shared links:

info

How your DID is resolved depends on the DID method used to create it. In this example, we're using the DHT DID method:

// DID in your bio
const didDhtUri = 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y';

// resolve the DID
const resolvedDhtDid = await DidDht.resolve(didDhtUri);

// access the DID document's service links
const dhtDidDocument = resolvedDhtDid.didDocument.service;

console.log(dhtDidDocument);

Output:

[
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: [ 'https://www.linkedin.com/in/ebonylouis' ]
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: [ 'https://x.com/EbonyJLouis' ]
  }
]

As you can see, we’ve successfully set up our service endpoints to point to both my LinkedIn and X accounts. Now it’s your turn to secure the bag and create your own decentralized link hub! And if you tweet about it, don’t forget to tag me.

To learn more about Decentralized Identity, check out TBD's Docs.

Sunday, 27. October 2024

KuppingerCole

The Human Factor: Addressing Mental Health in Cybersecurity


Burnout, fatigue, depression: This episode is all about the mental health challenges faced by cybersecurity professionals, highlighting the increasing pressures and responsibilities in the field. Matthias invited experts Sarb Sembhi and Dr. Kashyap Thimmaraju to discuss the impact of these challenges on individuals and organizations, emphasizing the need for better support systems, transparency, and proactive strategies to promote mental well-being in the cybersecurity industry.

Mental Health in Cybersecurity Foundation: https://www.virtuallyinformed.com/mhincs 

LinkedIn Group: https://www.linkedin.com/groups/12989900/ 

The Mental Health in Cybersecurity Charter: https://www.virtuallyinformed.com/mhincs-foundation-charter 

Contact the Mental Health in Cybersecurity Foundation 

Research: research@mhincs-foundation.org

Community of Practice: cop@mhincs-foundation.org



Saturday, 26. October 2024

Innopay

INNOPAY at SEFA Career Week

INNOPAY at SEFA Career Week from 25 Nov 2024 till 25 Nov 2024 Trudy Zomer 26 October 2024 - 18:45 Amsterdam, The Netherlands 52.354731843629, 4.9039604

INNOPAY is excited to participate in SEFA Career Week in Amsterdam! Join us for an insightful day filled with opportunities to learn more about our work and engage with our team.

Event highlights:

Interactive case solving: Participate in a hands-on case-solving session that gives you a glimpse into the real-world challenges we tackle at INNOPAY.

Ladies' dinner: In the evening, we invite female students to a special dinner, providing a relaxed atmosphere for networking, discussions, and insights into career opportunities.
 

This event is perfect for students eager to gain practical experience and make valuable connections. We look forward to meeting you and exploring how you can be part of the INNOPAY journey!

Don’t miss this unique opportunity to be part of the event. To apply, please visit the SEFA Career Week website. We look forward to meeting you and sharing more about the exciting journey at INNOPAY!


INNOPAY & Oliver Wyman Art Night

INNOPAY & Oliver Wyman Art Night from 14 Nov 2024 till 14 Nov 2024 Trudy Zomer 26 October 2024 - 18:36 Frankfurt 50.121329352631, 8.6365638

Join us for an unforgettable evening at the first-ever INNOPAY & Oliver Wyman Art Night in Frankfurt! This unique career event brings together INNOPAY and Oliver Wyman for an inspiring night of art, networking, and insight into our work and culture.

What to expect:

Company presentations: Discover what makes INNOPAY and Oliver Wyman leaders in our fields, with presentations that highlight our vision, projects, and career opportunities.

Dinner & networking: Enjoy a delicious dinner while meeting our team members and learning about the roles and paths available with us.

Canvas painting: Unleash your creativity with a guided canvas painting session, followed by drinks in a relaxed, informal setting.

 

This is the perfect opportunity to learn more about our companies, meet potential colleagues, and explore how your future career could begin with us. Interested? Sign up here.

Don’t miss this unique career experience! We can’t wait to see you there.

Friday, 25. October 2024

paray

The Personal Financial Data Rights Rule

On October 22, 2024, the Consumer Financial Protection Bureau (“CFPB”) finalized the Personal Financial Data Rights rule, which moves the United States closer to “an open banking system in which consumers, not dominant firms, control their data.”  The CFPB is generally tasked with “promoting fair, transparent, and competitive markets for consumer financial products and services.” … Continue reading The Personal Financial Data Rights Rule →

KuppingerCole

Dec 10, 2024: From Detection to Recovery: PAM's Crucial Role in Incident Management

In an era where cyber threats are constant, organizations must prepare not for if a breach will happen but when. The urgency to identify, address, and bounce back from security incidents has never been greater. Privileged Access Management (PAM) plays a vital role in bolstering defenses and streamlining responses to these incidents. However, many organizations still struggle to unlock its full benefits, leaving critical vulnerabilities exposed.

PingTalk

CMMC auditors see streamlined DIB compliance with IAM

CMMC auditors recognize IAM as crucial for securing DIB compliance. Identity management controls can protect CUI and national security, and can affect contract revenues.

Certified Cybersecurity Maturity Model Certification (CMMC) auditors know firsthand how identity management is a critical linchpin in maintaining security. When assessing a Defense Industrial Base (DIB) supplier's compliance with CMMC controls, identity and access management (IAM) is often one of the areas where they find significant vulnerabilities. The stakes are high: a misstep here could compromise sensitive Controlled Unclassified Information (CUI), harm national security, and jeopardize a company’s reputation. Additionally, DIB revenues can suffer if a supplier fails the audit and does not qualify for lucrative contracts.

 

In this blog, we share an auditor’s concerns and insights when evaluating a typical DIB’s identity solution, hoping to help others understand how to meet the CMMC requirements more effectively to protect against cyberattacks as well as land and maintain government contracts.

Thursday, 24. October 2024

HYPR

Fake IT Workers: How HYPR Stopped a Fraudulent Hire


Since 2022, the FBI and other agencies have been sounding the alarm about North Koreans posing as US or other non-North Korean based IT workers and infiltrating companies. In July, security firm KnowBe4 publicly revealed that they unknowingly hired a fake IT worker from North Korea. Fortunately they detected and blocked access as he attempted to load malware onto his system-connected laptop. Since then, similar stories have flooded in. Last week, reports surfaced that a fake North Korean IT worker hired by an unnamed company stole proprietary data and demanded a ransom payment in order to keep the hack secret.

However, the threat from interview fraud and fake employees goes far beyond the North Korean schemes, and large enterprises are not the only targets. At HYPR, we recently experienced an attempted fraud event and thwarted it through our Identity Assurance platform. To raise awareness among other businesses in the market, HYPR has elected to publicly report our experience and how we mitigated it.

Outing the Imposter 

After multiple rounds of live video interviews, HYPR decided to extend a contract to a European software engineer through a Technology Services contracting firm. This prospective new hire — let’s call him “John Doe” — was required to go through HYPR’s new joiner security processes, in addition to the background checks already performed during the candidate pre-hire screening. On October 17, HYPR began “John’s” day 1 onboarding and credentialing.

Onboarding an Employee at HYPR

At HYPR, we use our HYPR Affirm solution to conduct multiple verifications and checks for new hires before issuing credentials. Verifications may include possession checks, biometrics, telemetry, document authentication, video verification and other identifiers. Affirm is configurable to the verification level required by an organization, based on its needs and the role of an individual they hire. Below is the flow we typically use at HYPR:

The Warning Signs

The new hire check threw up several red flags. Although John’s phone number was verified, a location check did not match the information he had provided.

John’s passport passed the document review, however the facial verification check indicated discrepancies between the passport photo and face scan. The liveness detection test also failed.

Alarm bells began ringing for the team, but the prospective employee claimed that he was having technical difficulties with the document uploading and verification part of the onboarding.

HYPR encouraged him to try the process again. A second attempt an hour later now showed a different location and a different browser language.

The final step was live video verification to confirm that this was indeed the same person we originally interviewed. At this point, John dropped off and emailed that he could not turn on his video due to issues with his camera. We contacted our Technology Services provider to explain the warning signs we were seeing. The next day, John informed our provider that he had found a different opportunity and decided not to continue with onboarding at HYPR.

In the ordinary course of events, onboarding employees with Affirm is efficient and seamless. If red flags begin to manifest, however, the friction is increased to detect other risk indicators and prevent a fraudulent hire from proceeding.

Onboarding With HYPR Affirm

Tying Credential Provisioning to Identity Verification

It is critical to note that at no point in the onboarding process was “John” issued credentials to access any HYPR systems. This is because HYPR uses multi-factor verification (MFV) to issue phishing-resistant MFA credentials. This ensures an account is always tied to a verified, real-world identity.

By contrast, in the KnowBe4 case, they shipped the fake IT worker a provisioned FIDO-enabled YubiKey so he could log into their network. This meant that the North Korean operative had at least limited access from the get-go. He was caught and blocked only after he did something that was detectable by security monitoring tools. Had he been a highly sophisticated hacker, he may have been able to bypass some of those tools.

Key Takeaways

The Fake Employee Problem Goes Beyond North Korea: It’s not just North Korea perpetrating employee fraud schemes, and anyone can be a target.

Tie Credential Issuance to Identity Verification: Don’t rely on checks done during the interview or HR onboarding. Implement a multi-factor verification process to tie real-world identity to the digital identity during the provisioning process.

Implement Video-Based Verification: Video-based verification is a critical identity control, and not just at onboarding. Microsoft recently announced that it’s using video-based identity verification for critical credential recovery processes to combat social engineering threats.

A Unified Identity Assurance Approach: Experts increasingly recommend that organizations implement a holistic Identity Assurance approach that unifies phishing-resistant passwordless authentication, adaptive risk mitigation, and automated identity verification.


Ocean Protocol

DF112 Completes and DF113 Launches

Predictoor DF112 rewards available. DF113 runs Oct 24 — Oct 31, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 112 (DF112) has completed.

DF113 is live today, Oct 24. It concludes on October 31. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF113 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF113

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF112 Completes and DF113 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

The Dawn of Open Banking in the U.S.: How Identity Powers CFPB’s Personal Financial Data Rights Rule

Enable CFPB personal financial data rights rule with digital identity to power US open banking.

BlueSky

Bluesky Announces Series A to Grow Network of 13M+ Users

Bluesky now exceeds 13 million users, the AT Protocol developer ecosystem continues to grow, and we’ve shipped highly requested features like direct messages and video.

Bluesky now exceeds 13 million users, the AT Protocol developer ecosystem continues to grow, and we’ve shipped highly requested features like direct messages and video. We’re excited to announce that we’ve raised a $15 million Series A financing led by Blockchain Capital with participation from Alumni Ventures, True Ventures, SevenX, Amir Shevat of Darkmode, co-creator of Kubernetes Joe Beda, and others.

Our lead, Blockchain Capital, shares our philosophy that technology should serve the user, not the reverse — the technology being used should never come at the expense of the user experience. Additionally, this fund has a uniquely deep understanding of our decentralized foundation and has extensive experience building developer ecosystems, so it’s a natural partnership as we continue to invest in the ATmosphere (the AT Protocol developer ecosystem). This does not change the fact that the Bluesky app and the AT Protocol do not use blockchains or cryptocurrency, and we will not hyperfinancialize the social experience (through tokens, crypto trading, NFTs, etc.). To ensure we and our users benefit fully from this expertise, partner Kinjal Shah will join our board. Kinjal shares our vision for a social media ecosystem that empowers the people who use it, and we are glad to have her support as we invest in driving the adoption of decentralized social.

With this fundraise, we will continue supporting and growing Bluesky’s community, investing in Trust and Safety, and supporting the ATmosphere developer ecosystem. In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames. Bluesky will always be free to use — we believe that information and conversation should be easily accessible, not locked down. We won’t uprank accounts simply because they’re subscribing to a paid tier.

Additionally, we’re proud of our vibrant community of creators, including artists, writers, developers, and more, and we want to establish a voluntary monetization path for them as well. Part of our plan includes building payment services for people to support their favorite creators and projects. We’ll share more information as this develops.

Bluesky’s open technology, the AT Protocol, makes a whole ecosystem of apps possible. We’re excited that developers have already begun building their own applications with totally different purposes from the Bluesky app. For example, Smoke Signal is an events app, Frontpage is a web forum, and Bluecast is an audio app (that includes karaoke with licensed songs)! We hypothesize that monetization strategies like subscriptions, domain-name registrations, and payments to creators will enable these independent apps to grow as well.

With every month that passes, the need for an open social network becomes more clear. We’re very excited about where we’re headed — we’re building not just another social app, but an entire network that gives users freedom and choice. Thank you for joining us.

What have we done since our last fundraise?

Since raising our seed round last year, we have:

Grown Bluesky from an invite-only app with 1M users to an open app serving more than 13M people! We’ve welcomed millions of people from the United States, Brazil, Japan, the United Kingdom, Germany, and more.
Launched federation for self-hosters and developers. Now there are over 1,000 other personal data servers (PDS) outside of Bluesky.
Launched custom feeds, making algorithmic choice a reality. Now there are over 50k feeds on Bluesky.
Invested heavily in anti-harassment tooling and Trust and Safety. Built labeling services and opened the ability for anyone to run foundational pieces of stackable moderation. Additionally, open-sourced Ozone, a moderation tool.
Shipped highly requested features like direct messaging, GIFs, and video.
Pioneered novel features like starter packs to help communities find each other in a privacy-preserving way.
Partnered with organizations like Buffer to add Bluesky integrations and Namecheap to sell domains.
Announced $20k in developer grants and supported the growth of the ATmosphere through additional developer documentation, talks, and partnerships.
Much more, including a new logo and a public web interface!

Traditional social media companies have enclosed the online commons, locked down their APIs to shut out independent developers, and deployed black box algorithms that leave us guessing. This era of old social is over — at Bluesky, we’re returning choice and power to you.

Wednesday, 23. October 2024

liminal (was OWI)

The Key to a Passwordless Future Falls in the Hands of the Enterprise Chief Information Security Officer

The post The Key to a Passwordless Future Falls in the Hands of the Enterprise Chief Information Security Officer appeared first on Liminal.co.

KuppingerCole

Jan 23, 2025: Identity-Centric Zero Trust Infra Access

Given the rapid advancements in technology, infrastructure security must evolve beyond traditional perimeter defenses. The rise of cloud applications, remote workforces, and distributed environments necessitates a shift towards identity-centric Zero Trust access. This approach removes the notion of network segments and focuses on granting secure access to users based on dynamic policies and identity risk, ensuring only authorized users interact with critical resources.

Okta

How to Build Secure Okta Node.js Integrations with DPoP


Integrating with Okta management API endpoints might be a good idea if you are trying to read or manage Okta resources programmatically. This blog demonstrates how to securely set up a node application to interact with Okta management API endpoints using a service app.

Okta API management endpoints can be accessed using an access token issued by the Okta org authorization server with the appropriate scopes needed to make an API call. This can be either through authorization code flow for the user as principal or client credentials flow for a service as principal.

For this blog, we will examine the OAuth 2.0 client credentials flow. Okta requires the private_key_jwt token endpoint authentication type for this flow. Access tokens generated by the Okta org authorization server expire in one hour. Any client can call Okta API endpoints with the token during this hour.

How do you make OAuth 2.0 access tokens more secure?

Increase security by constraining the token to the sender. With a sender-constrained token, the resource server knows every request originates from the original client that initially requested the token. OAuth 2.0 Demonstrating Proof of Possession (DPoP) is a way to achieve this, as explained in RFC 9449. You can read more about DPoP in this post:

Elevate Access Token Security by Demonstrating Proof-of-Possession

Protect your OAuth 2.0 access token with sender constraints. Learn about possession proof tokens using DPoP.

Alisa Duncan

To demonstrate this, we will first set up a node application with a service app without requiring DPoP. Then, we’ll add the DPoP constraint and make the necessary changes in our app to implement it.

Table of Contents

How do you make OAuth 2.0 access tokens more secure?
Create a service app with OAuth 2.0 client credentials without DPoP
Add OAuth 2.0 and OpenID Connect (OIDC) to your Node.js service application
Configure OAuth 2.0 in the Node.js service
Create an OAuth 2.0-compliant Node.js service app
Secure access tokens by adding DPoP to the Node.js service
Experiment with DPoP and API scopes for Okta API and custom resource server calls
Learn more about Okta Management API, DPoP, and OAuth 2.0

Create a service app with OAuth 2.0 client credentials without DPoP

Prerequisites

You’ll need the following tools:

Node.js v18 or greater
IDE (I used VS Code)
Terminal window (I used the integrated terminal in VS Code)

Add OAuth 2.0 and OpenID Connect (OIDC) to your Node.js service application

Before you begin, you’ll need a free Okta developer edition account. Sign up for a free Workforce Identity Cloud Developer Edition account if you don’t already have one.

Open your Okta dashboard in a browser. Navigate to Applications > Applications. Select API Services and press Next. Name your application and press Save.

In the General tab, note the Client ID value and your Okta domain. You can find the Okta domain by expanding the settings menu in the toolbar. You need these values for your application configuration. Press edit in the Client Credentials section and follow these steps:

Change the Client authentication to Public Key / Private Key.
In the PUBLIC KEYS section, press the Add key button.
Click Generate new key to have Okta generate a new key.
Save the private key (in PEM format) in a file called cc_private_key.pem for later use.
Press Save.

In General Settings section, press edit and make the following changes:

Disable Proof of possession > Require Demonstrating Proof of Possession (DPoP) header in token requests.
Press Save.

Navigate to the Okta API Scopes tab and grant the okta.users.read scope.

In the Admin roles tab, press Edit assignments. Find the Read-only Administrator in the Role selection menu, and press the Save Changes button.

Those are all of the changes required in Okta until you re-enable DPoP.

Configure OAuth 2.0 in the Node.js service

Create a project directory for local development named okta-node-dpop. Open the project directory in your IDE. Create a file called .env in the project root directory and add the following configuration settings:

OKTA_ORG_URL=
OKTA_CLIENT_ID={yourClientID}
OKTA_SCOPES=okta.users.read
OKTA_CC_PRIVATE_KEY_FILE=./assets/cc_private_key.pem

Save the private key file from the earlier step as assets/cc_private_key.pem in the root directory.

Create an OAuth 2.0-compliant Node.js service app

Open a terminal window in the project directory and run npm init to create the scaffolding. Press Enter to accept all defaults.

Install dependencies for the project by running:

npm i dotenv@16.4.5 jsonwebtoken@9.0.2

Create an oktaService.js file in the project root. We’ll add the basic foundation of authenticating and calling Okta endpoints in this file. This file contains three key functions:

The oktaService.authenticate(..) method gets an access token by:

Generating the private key JWT required for authentication and signing it using a keypair registered in the Okta application
Generating the token request to the Okta org authorization server
Retrieving and storing the access token for future calls
Note: this token is valid for one hour by default at the time of writing this article.

The oktaService.managementApiCall(..) method makes Okta management API calls and adds the necessary headers and tokens to each request.

The oktaHelper object contains utility methods to store the Okta configuration and access token, generate the private key JWT, and generate the token request.

Add the following code to the oktaService.js file:

const fs = require("fs");
const crypto = require("crypto");
const jwt = require("jsonwebtoken");
require("dotenv").config(); // Loads variables in .env file into the environment

const oktaHelper = {
  oktaDomain: process.env.OKTA_ORG_URL || "", // Okta domain URL
  oktaClientId: process.env.OKTA_CLIENT_ID || "", // Client ID of API service app
  oktaScopes: process.env.OKTA_SCOPES || "", // Scopes requested - Okta management API scopes
  ccPrivateKeyFile: process.env.OKTA_CC_PRIVATE_KEY_FILE || "", // Private key for signing private key JWT
  ccPrivateKey: null,
  accessToken: "",
  getTokenEndpoint: function () {
    return `${this.oktaDomain}/oauth2/v1/token`;
  }, // Token endpoint
  getNewJti: function () {
    return crypto.randomBytes(32).toString("hex");
  }, // Helper method to generate new identifier
  generateCcToken: function () {
    // Helper method to generate private key JWT
    let privateKey = this.ccPrivateKey || fs.readFileSync(this.ccPrivateKeyFile);
    let signingOptions = {
      algorithm: "RS256",
      expiresIn: "5m",
      audience: this.getTokenEndpoint(),
      issuer: this.oktaClientId,
      subject: this.oktaClientId,
    };
    return jwt.sign({ jti: this.getNewJti() }, privateKey, signingOptions);
  },
  tokenRequest: function (ccToken) {
    // Generate token request using client_credentials grant type
    return fetch(this.getTokenEndpoint(), {
      method: "POST",
      headers: {
        Accept: "application/json",
        "Content-Type": "application/x-www-form-urlencoded",
      },
      body: new URLSearchParams({
        grant_type: "client_credentials",
        scope: this.oktaScopes,
        client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        client_assertion: ccToken,
      }),
    });
  },
};

const oktaService = {
  authenticate: async function () {
    // Use to authenticate and generate access token
    if (!oktaHelper.accessToken) {
      console.log("Valid access token not found. Retrieving new token...\n");
      let ccToken = oktaHelper.generateCcToken();
      console.log(`Using Private Key JWT: ${ccToken}\n`);
      console.log(`Making token call to ${oktaHelper.getTokenEndpoint()}`);
      let tokenResp = await oktaHelper.tokenRequest(ccToken);
      let respBody = await tokenResp.json();
      oktaHelper.accessToken = respBody["access_token"];
      console.log(`Successfully retrieved access token: ${oktaHelper.accessToken}\n`);
    }
    return oktaHelper.accessToken;
  },
  managementApiCall: function (relativeUri, httpMethod, headers, body) {
    // Construct Okta management API calls
    let uri = `${oktaHelper.oktaDomain}${relativeUri}`;
    let reqHeaders = {
      Accept: "application/json",
      Authorization: `Bearer ${oktaHelper.accessToken}`,
      ...headers,
    };
    return fetch(uri, { method: httpMethod, headers: reqHeaders, body });
  },
};

module.exports = oktaService;

Add a new file named app.js in the project root folder. This is the entry point for running our Node.js service application. In this file, we’ll do the following:

Import oktaService
Create an async wrapper to execute asynchronous code
Authenticate to Okta by calling oktaService.authenticate()
Validate the previous step by listing users using a GET call to Okta’s /api/v1/users endpoint

Paste the following code into the app.js file:

const oktaService = require('./oktaService.js');

(async () => {
  await oktaService.authenticate();
  let usersResp = await oktaService.managementApiCall('/api/v1/users', 'GET');
  if (usersResp.status == 200) {
    let respBody = await usersResp.json();
    console.log(`Users List: ${JSON.stringify(respBody)}\n`);
  } else {
    console.log('API error', usersResp);
  }
})();

Next, set this file as the entry point. In the package.json file, update the scripts property with the following:

"scripts": { "start": "node app.js" }

This gives us an easy way to run the app. Run the app using npm start. You should see a list of console logs:

Valid access token not found. Retrieving new token...
Using Private Key JWT: eyJh........
Making token call to https://........../oauth2/v1/token
Successfully retrieved access token: eyJ..................
Users List: [.........]

If you receive any errors, this is a good time to troubleshoot and resolve issues before adding DPoP.

Secure access tokens by adding DPoP to the Node.js service

Why isn’t OAuth 2.0 client credential flow enough?

Our setup used the client_credentials grant type to authenticate and get an access token. If someone gets hold of the private_key_jwt, they cannot replay it beyond expiration (I reduced it to 5 minutes to shorten this window). However, if someone gets ahold of the access token, they can use it for up to 1 hour, which is the default expiration time of an access token.
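To make the two replay windows concrete, here is a minimal sketch using only Node's standard library. The token built here is JWT-shaped but unsigned and purely illustrative (real Okta tokens are RS256-signed); it only demonstrates how the iat and exp claims bound how long a leaked credential stays usable:

```javascript
// Build an unsigned, JWT-shaped string (header.payload.signature) for
// demonstration only; real tokens carry a cryptographic signature.
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

function buildDemoToken(lifetimeSeconds) {
  const iat = Math.floor(Date.now() / 1000);
  const header = b64url({ alg: "none", typ: "JWT" });
  const payload = b64url({ iat, exp: iat + lifetimeSeconds });
  return `${header}.${payload}.`;
}

// Decode the payload and report how long a stolen credential stays usable
function replayWindowSeconds(token) {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString("utf8")
  );
  return payload.exp - payload.iat;
}

const accessToken = buildDemoToken(3600);   // access token: 1 hour by default
const clientAssertion = buildDemoToken(300); // private_key_jwt: 5 minutes

console.log(replayWindowSeconds(accessToken));     // 3600
console.log(replayWindowSeconds(clientAssertion)); // 300
```

A stolen client assertion is useless after five minutes, but a stolen bearer access token works for a full hour, which is exactly the gap sender-constrained tokens close.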

Constraining the token sender is one way to make the access token more secure. How can you do that? By adding the Demonstrating Proof of Possession (DPoP) OAuth extension method to the access token interaction. The technique adds a sender-generated token for each call it makes. Doing so prevents replay attacks even before tokens expire since each call needs a fresh DPoP token. Here is the detailed flow:

You’ll enable DPoP in Okta application settings to experiment with sender-constrained tokens. Open the Okta Admin Console in your browser and navigate to Applications > Applications to see the list of Okta applications in your Okta account. Open the service application to edit it.

In your service app’s General Settings section, change Proof of possession > Require Demonstrating Proof of Possession (DPoP) header in token requests to true. Then click Save.

You need a new public/private key pair to sign the DPoP proof JWT. If you know how to generate one, feel free to skip this step. I used the following steps to generate it:

Go to the JWK generator. Select the following and then click Generate:

Key Use: Signature
Algorithm: RS256
Key ID: SHA-256
Show X.509: Yes

Copy the Public Key (JSON format) and save it to assets/dpop_public_key.json.
Copy the Private Key (X.509 PEM format) and save it to assets/dpop_private_key.pem. (Do not click Copy to Clipboard; this copies the key as a single line, which will not work with the following steps. Instead, select and copy the value manually.)

Now that you have a new keypair for DPoP, you’ll add the variables to the project. In the .env file, add the new file paths:

....
OKTA_SCOPES=okta.users.read
OKTA_CC_PRIVATE_KEY_FILE=./assets/cc_private_key.pem
OKTA_DPOP_PRIVATE_KEY_FILE=./assets/dpop_private_key.pem
OKTA_DPOP_PUBLIC_KEY_FILE=./assets/dpop_public_key.json

Add the DPoP-related code to oktaService.js. First, add the key file paths to the configuration; we can use them while adding DPoP to our methods:

const oktaHelper = {
  .......
  ccPrivateKeyFile: process.env.OKTA_CC_PRIVATE_KEY_FILE || '', // Private key for signing private key JWT
  ccPrivateKey: null,
  // Add this code ======================
  dpopPrivateKeyFile: process.env.OKTA_DPOP_PRIVATE_KEY_FILE || '', // Private key for signing DPoP proof JWT
  dpopPublicKeyFile: process.env.OKTA_DPOP_PUBLIC_KEY_FILE || '', // Public key embedded in the DPoP proof JWT header
  dpopPrivateKey: null,
  dpopPublicKey: null,
  // Add above code ======================
  accessToken: '',
  .....
}

Add a helper method to generate a DPoP proof JWT. It constructs the JWT based on the format defined in the spec: the public key goes in the JWT header, and additional claims (such as the nonce or access token hash) can be passed into the payload.

const oktaHelper = {
  .....
  // Add this as the last attribute of the oktaHelper object
  generateDpopToken: function (htm, htu, additionalClaims) {
    let privateKey = this.dpopPrivateKey || fs.readFileSync(this.dpopPrivateKeyFile);
    let publicKey = this.dpopPublicKey || fs.readFileSync(this.dpopPublicKeyFile);
    let signingOptions = {
      algorithm: 'RS256',
      expiresIn: '5m',
      header: {
        typ: 'dpop+jwt',
        alg: 'RS256',
        jwk: JSON.parse(publicKey)
      }
    };
    let payload = { ...additionalClaims, htu, htm, jti: this.getNewJti() };
    return jwt.sign(payload, privateKey, signingOptions);
  }
};

Next, add the DPoP proof token to the tokenRequest method. This method gets the newly generated DPoP proof token and adds it to the token request as a header.

// Add dpopToken as a new parameter
tokenRequest: function (ccToken, dpopToken) {
  // Generate token request using client_credentials grant type
  return fetch(this.getTokenEndpoint(), {
    method: 'POST',
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/x-www-form-urlencoded',
      // New Code - Start
      DPoP: dpopToken
      // New Code - End
    },
    ...
  });
},

Add the following steps to the authenticate method to add DPoP.

Generate a new DPoP proof for the POST method and the token endpoint.
Make the token call with both the private_key_jwt and the DPoP JWT.
Okta adds an extra security measure by requiring a nonce in token requests that use DPoP: it responds to token requests that don’t include a nonce with the use_dpop_nonce error. Read more about the nonce in the spec.
Generate a new DPoP proof JWT that includes the nonce value in its payload.
Make the token call again with this new JWT.

Once we follow these steps, we’ll have a new access token to use in our API call. Let’s implement the steps. Update the authenticate method to the following:

authenticate: async function () {
  // Use to authenticate and generate access token
  if (!oktaHelper.accessToken) {
    console.log('Valid access token not found. Retrieving new token...\n');
    let ccToken = oktaHelper.generateCcToken();
    console.log(`Using Private Key JWT: ${ccToken}\n`);
    // New Code - Start
    let dpopToken = oktaHelper.generateDpopToken('POST', oktaHelper.getTokenEndpoint());
    console.log(`Using DPoP proof: ${dpopToken}\n`);
    // New Code - End
    console.log(`Making token call to ${oktaHelper.getTokenEndpoint()}`);
    // Update the following line by adding the dpopToken parameter
    let tokenResp = await oktaHelper.tokenRequest(ccToken, dpopToken);
    let respBody = await tokenResp.json();
    // New Code - Start
    if (tokenResp.status != 400 || (respBody && respBody.error != 'use_dpop_nonce')) {
      console.log('Authentication Failed');
      console.log(respBody);
      return null;
    }
    let dpopNonce = tokenResp.headers.get('dpop-nonce');
    console.log(`Token call failed with nonce error\n`);
    dpopToken = oktaHelper.generateDpopToken('POST', oktaHelper.getTokenEndpoint(), { nonce: dpopNonce });
    ccToken = oktaHelper.generateCcToken();
    console.log(`Retrying token call to ${oktaHelper.getTokenEndpoint()} with DPoP nonce ${dpopNonce}`);
    tokenResp = await oktaHelper.tokenRequest(ccToken, dpopToken);
    respBody = await tokenResp.json();
    // New Code - End
    oktaHelper.accessToken = respBody['access_token'];
    console.log(`Successfully retrieved access token: ${oktaHelper.accessToken}\n`);
  }
  return oktaHelper.accessToken;
}

Before proceeding, make sure you have enabled DPoP in your Okta service application. Now, test the steps by running npm start in the terminal. Oops! You received an access token, but the call to the users API failed with a 400 status. We didn’t include a DPoP proof in this API call. With DPoP enabled, we must include a new DPoP proof for every call. This prevents malicious actors from reusing stolen access tokens.

Let’s add some code to include DPoP proof during every API call.

In the oktaService.js file, add a helper method to generate the hash of the access token or ath value. You’ll use this value later to bind access tokens with DPoP proofs:

const oktaHelper = {
  .....,
  // Add as the last attribute of the oktaHelper object
  generateAth: function (token) {
    return crypto
      .createHash('sha256')
      .update(token)
      .digest('base64')
      .replace(/\//g, '_')
      .replace(/\+/g, '-')
      .replace(/\=/g, '');
  }
};

A valid DPoP proof JWT includes the access token hash (ath) value. To make this change, update the managementApiCall method:

managementApiCall: function (relativeUri, httpMethod, headers, body) {
  // Construct Okta management API calls
  let uri = `${oktaHelper.oktaDomain}${relativeUri}`;
  // New Code - Start
  let ath = oktaHelper.generateAth(oktaHelper.accessToken);
  let dpopToken = oktaHelper.generateDpopToken(httpMethod, uri, { ath });
  // New Code - End
  // Update reqHeaders object
  let reqHeaders = {
    'Accept': 'application/json',
    'Authorization': `DPoP ${oktaHelper.accessToken}`,
    'DPoP': dpopToken,
    ...headers
  };
  return fetch(uri, { method: httpMethod, headers: reqHeaders, body });
}

Run npm start. Voila! You see a list of users!

We successfully authenticated to Okta with a service app demonstrating DPoP and are using this access token and DPoP proof to access Okta Admin Management API endpoints.

Experiment with DPoP and API scopes for Okta API and custom resource server calls

You can download the completed project from the GitHub repository.

Try modifying the project using different Okta API scopes and experimenting with other endpoints. Ensure you give permissions to your service app by assigning appropriate Admin roles. To improve security, you can implement similar protection to your custom resource server endpoints using a custom authorization server and custom set of scopes.

Learn more about Okta Management API, DPoP, and OAuth 2.0

In this post, you accessed the Okta management API using a Node.js app and made it more secure by adding DPoP support. I hope you enjoyed it! If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

Elevate Access Token Security by Demonstrating Proof-of-Possession
Okta Management API reference
OAuth 2.0 and OpenID Connect overview
Implement OAuth for Okta
Configure OAuth 2.0 Demonstrating Proof-of-Possession

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Tuesday, 22. October 2024

TBD on Dev.to

How Verifiable Credentials Can Help Combat Fake Online Reviews


The Federal Trade Commission (FTC) has introduced a new rule banning fake online reviews. This rule, which penalizes businesses and individuals involved in the sale or purchase of fake reviews, represents a much-needed step in promoting trust online. But while enforcement is crucial, there's still the challenge of identifying which reviews are legitimate. This is where Verifiable Credentials (VCs) can provide a solution.

The Problem with Fake Reviews

Fake reviews have been an issue for years, distorting consumer choices. As FTC Chair Lina Khan pointed out, these reviews “pollute the marketplace and divert business away from honest competitors.”

Let's face it, with pretty much every social networking site turning into an online shopping mall, many purchasing decisions are influenced by online reviews. I, personally, do most of my shopping online and I rely very heavily on reviews. I often question which ones are actually real, and if I'm not sure, I often shy away from making the purchase at all - which isn't good for me or the business.

The new FTC rule aims to crack down on this problem by prohibiting reviews from people who don’t exist, those who have no real experience with the product, or those misrepresenting their experience. It also bans businesses from creating or selling fake reviews. While these measures are great, enforcing them effectively presents challenges, especially with AI-generated content.

How Can Verifiable Credentials Help

Verifiable Credentials are digital certificates that prove specific facts about an individual or entity. These credentials are cryptographically signed, making them tamper-proof, and they can be independently verified without relying on a central authority. In the context of online reviews, VCs can establish authenticity.

Here’s how it could work:

When a customer purchases a product or uses a service, the business can issue a VC to confirm their legitimate experience. This can even be attached to their receipt. This credential could serve as proof that the individual has transacted with the business, preventing fake reviews from people with no real experience.

A review platform could require users to attach a VC that verifies they have purchased or used the product before submitting a review. This would eliminate fake reviews from non-customers and ensure that only those with firsthand experience can provide feedback.

Since VCs are cryptographically signed, they cannot be altered or faked. This ensures the integrity of the review content and prevents businesses from modifying or fabricating reviews to boost their reputations.

Benefits for Businesses and Consumers

Consumers would no longer have to second guess the authenticity of reviews. They can trust that every review is tied to a verified, real customer.

By adopting VCs, businesses can ensure they remain compliant with the new FTC rule. They would have a verifiable record that shows they only accept reviews from legitimate customers, protecting themselves from potential penalties.

Early adopters of VCs in their review systems could set themselves apart from competitors. Businesses that champion transparency and fairness by using VCs can build stronger relationships with their customers, enhancing brand loyalty.

Using VCs can automate the verification process, reducing the need for manual review moderation. As AI continues to be used in generating content, including reviews, this automation is key to keeping platforms efficient while maintaining high standards of trust.

Trust But Verify

The FTC’s new rule is a step in the right direction, but to truly tackle the problem of fake reviews, the marketplace needs more than just enforcement. It needs technology that ensures transparency and trust. Verifiable Credentials can provide that assurance, giving businesses and consumers the tools they need to foster a fair, competitive, and honest marketplace. As online commerce continues to grow, adopting VCs could be the key to making reviews a trustworthy resource once again.

If you'd like to get started with Verifiable Credentials, check out our free, open source SDKs!


Indicio

Introducing Indicio Proven Auth — easier, faster, more secure identity access management with Verifiable Credentials

The post Introducing Indicio Proven Auth — easier, faster, more secure identity access management with Verifiable Credentials appeared first on Indicio.
Indicio Proven Auth allows you to quickly configure single sign-on (SSO) so that your customers or end users can log in with portable digital identities instead of usernames and passwords.

By Trevor Butterworth

With Gartner Research predicting a massive shift towards decentralized digital identity and verifiable claims, Indicio has launched a simple, powerful solution for any business or organization to benefit from using Verifiable Credentials — Indicio Proven® Auth.

Proven Auth allows you to quickly configure Single Sign-on (SSO) so that your customers or end users can use a Verifiable Credential to log in to applications and websites instead of usernames and passwords. This means:

Replacing weak passwords and weak second-factor authentication for better security.
No tracking by centralized third-party identity providers.
No worries if a federated identity provider goes dark.
Reduced steps for authentication in a zero-trust architecture model.
A simpler, more secure user experience.
Take advantage of the portable digital identity transformation in the European Union (eIDAS, EUDI), the travel sector, and in mobile driver’s licenses.

Unlock the feature-rich technology driving digital transformation

The improved workflow, privacy, and security are enough to justify making the switch — but there’s a lot more feature-rich power to using  Verifiable Credentials for SSO and identity access management.

Get all these features faster and cheaper than conventional identity access management solutions.
Comes with Keycloak for identity access management, but is easily configurable to use other software.
Combine popular protocols (e.g., OIDC, SAML) with widely-used policy engines (such as Amazon Verified Permissions or Abacus) for role- or user-based authorization decisions based on the attributes of a Verifiable Credential.
Unlike conventional identity provision, Proven Auth enables systems to allow access based on credentials they have never seen before, provided they trust the source (e.g., government-issued ID).
Credentials can be quickly configured to handle complex information flows, making it easier to implement least-privilege access for zero trust.
Verifiable Credentials go beyond the limits of passkeys, do not need to be enrolled, and can hold contextually useful information that can be shared by consent (simplifying compliance).

How Indicio Proven Auth delivers next-gen SSO, privacy, security, and user experience

Conventional SSO requires you to use a third-party identity provider to authenticate access to multiple applications and services. While this saves you from entering a password and username for each session with each service, it still means relying on a subscription to a third-party identity provider and cumbersome password rotation, which add additional expense and unnecessary complexity to the user experience.

For example, if an employer issues a Verifiable Credential to an employee, the employer can be certain it’s their employee accessing an application or system rather than simply trusting an outside identity provider. The employee doesn’t need to use or rotate passwords, their access into the company’s systems cannot be stolen or phished, and third-party identity providers aren’t able to track employee login behavior.

Seamless SaaS access

Verifiable Credentials use advanced cryptography for instant, seamless authentication. You can be certain of the source of the credential, you can be certain that it is bound to the person or organization it has been issued to, and you can be certain that the data inside has not been altered.

SaaS applications can be quickly configured to accept a Verifiable Credential instead of a third-party identity provider. All you need to do is issue a Verifiable Credential or decide which Verifiable Credential issuers are valid for accessing your system. When logging into an application, Proven Auth checks to see if the credential issuer is valid and provides the destination system with the necessary data about who you are and what you should have access to. Proven Auth doesn’t need to have seen your credential before to do this.
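The issuer-based decision described above can be sketched as a simple trust-list check. This is an illustrative model of what a verifier does, not Indicio's actual API; the DIDs and field names are hypothetical:

```javascript
// Issuers this deployment has chosen to trust (hypothetical DIDs).
const TRUSTED_ISSUERS = new Set([
  'did:example:gov-dmv',  // e.g., a government mobile driver's license issuer
  'did:example:acme-hr',  // e.g., an employer issuing staff credentials
]);

function authorize(presentation) {
  const { issuer, credentialSubject } = presentation;
  // Access is granted based on who issued the credential, so this exact
  // credential never needs to have been seen before.
  if (!TRUSTED_ISSUERS.has(issuer)) {
    return { allowed: false };
  }
  // Pass verified attributes downstream for role- or user-based decisions.
  return {
    allowed: true,
    user: credentialSubject.id,
    roles: credentialSubject.roles ?? [],
  };
}

const decision = authorize({
  issuer: 'did:example:acme-hr',
  credentialSubject: { id: 'did:example:bob', roles: ['admin'] },
});
console.log(decision.allowed); // true: the issuer is on the trust list
```

In a real deployment the credential's signature and revocation status would of course be verified cryptographically before this step.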

Combine SSO with secure biometric authentication
For critical security access, Verifiable Credentials are a powerful way to implement biometric access, as a liveness check can be accompanied by the presentation of a biometric template bound to a credential and both compared for instantaneous authentication.

Do more for less

Compared with current approaches to managing identity, privacy, and security, Gartner’s Market Report notes that decentralized identity and Verifiable Credentials  represent “magnitudes of improvement in terms of efficiency, cost and assurance.”

To see how Indicio Proven Auth can transform your identity access management and prepare you to take advantage of a decentralized world, why not book a demo and learn how  Indicio is deploying Verifiable Credential solutions across different sectors for seamless trust.

To learn more about Indicio Proven Auth and verifiable credentials, contact us or visit us at indicio.tech/proven-auth/

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



TBD

How Verifiable Credentials Can Help Combat Fake Online Reviews

With the FTC’s new rule banning fraudulent endorsements, Verifiable Credentials can combat fake online reviews.


Monday, 21. October 2024

Indicio

Special Indicio Network promotion celebrating the open-source community

The post Special Indicio Network promotion celebrating the open-source community appeared first on Indicio.
Realize the benefits of your decentralized identity and Verifiable Credential products with the Indicio Network. The world’s only enterprise-grade network for delivering fast and powerful decentralized identity solutions built on Hyperledger Indy.

Gartner predicts that “by 2026, at least 500 million smartphone users will be regularly making verifiable claims using a digital identity wallet built on distributed ledger technology,” and Indicio is leading the way with our global network and suite of industry solutions. 

To celebrate the Linux Foundation Decentralized Trust Member Summit, we’re offering up to 50% off the Indicio Network transaction endorser writing packages for new customers and node operators, giving you the perfect opportunity to deploy on a decentralized network built and managed to deliver the highest-quality performance for enterprise-grade solutions at any scale.

Why Choose the Indicio Network?

The Indicio Network is built on Hyperledger Indy and designed to support a wide range of Verifiable Credential implementations at any scale. It provides a stable home for your solutions no matter your use case, industry, or organization type. As the world shifts to decentralized identity models, the Indicio Network stands out with its advanced features, ease of use, and strong community support. Here’s why many have found success using the Indicio Network:

Privacy-first by design

The Indicio Indy Network makes it possible for governments, enterprises, and organizations to create advanced data-sharing systems built on decentralized identity technologies. Designed and managed to the highest standard of privacy and security, no personal data is written to the public ledger. Ever. This eliminates the need for complicated encryption techniques, drastically reducing the risk of breaches and misuse.

Proven security and scalability

Indicio’s network is operated by nearly two dozen companies and organizations from around the world, ensuring a robust, resilient platform for the public DIDs that support credential issuance. Credentials that are issued and verified using the Indicio Network operate at lightning speed, no matter the scale.

Seamless interoperability  

The Indicio Network is built for flexibility. Its decentralized architecture built on Hyperledger Indy ensures that you can seamlessly connect and interact across different verifiable credential ecosystems, ensuring interoperability across global markets. Whether you’re in finance, healthcare, education or travel, the Indicio Network allows you to layer verifiable credentials into your existing systems.

Open-source innovation  

Backed by the vibrant Hyperledger community and other open source and open standards projects and bodies, the Indicio Network is continually expanding. Open-source contributions ensure that the network is at the cutting edge of innovation, with developers and businesses working together to build and refine decentralized identity solutions. 

Compliance-ready  

With data privacy regulations like GDPR and EIDAS, data compliance is critical. The Indicio Network ensures you meet these regulations by enabling decentralized verification methods that don’t require collecting or storing personal information — making it easier to meet compliance standards without sacrificing user experience.

There’s no better time to join the Indicio Network

At Indicio, we believe in putting open-source technology at the center of the growing decentralized identity market. That’s why we’re offering up to 50% off in honor of the Linux Foundation Decentralized Trust Member Summit, giving our friends across the open-source community a cost-effective way to explore the benefits of our professional network.

Whether you’re looking to create secure authentication systems, streamline reusable Know-Your-Customer (KYC) processes, or build privacy-respecting identity ecosystems, the Indicio Network provides the infrastructure and support you need to scale and succeed.

Don’t miss out—activate your discount today and take advantage of this exclusive offer*. Let’s build the future of decentralized identity together!

* Offer expires on November 30, 2024; new customers and node operators only.

To learn more about Indicio and verifiable credentials, contact us or visit us at Indicio.tech

 

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community



IdRamp

Healthcare Account Recovery: Identity Verification with MS Entra ID

Healthcare organizations are facing a cyberattack epidemic, with account takeover attack (ATO) incidents surging at an alarming rate. The post Healthcare Account Recovery: Identity Verification with MS Entra ID first appeared on Identity Verification Orchestration.


Tokeny Solutions

Institutional Tokenization 3.0: Break Silos

The post Institutional Tokenization 3.0: Break Silos appeared first on Tokeny.
October 2024

Since Tokeny started building tokenization solutions in 2017, we have seen financial institutions exploring tokenization of assets in many different ways. The evolution has unfolded in three main phases, each addressing the limitations of the previous one.

Tokenization 1.0 – Permissioned networks

Initially, institutions turned to permissioned blockchains for tokenizing assets, prioritizing full control over both the network and its tokens. However, this approach quickly exposed limitations in terms of scalability and interoperability, making it difficult for tokens to interact with external applications.

Tokenization 2.0 – ERC-20 Permissionless Tokens + Wallet Whitelists

To solve these issues, institutions started to turn to ERC-20 tokens with wallet whitelists on public blockchains. While this allowed some control over the distribution of tokens, it introduced new compliance and scalability problems: wallets are not linked to identities on the blockchain, so onchain ownership records became unreliable.

It made cross-platform distribution and onchain settlement complex from a compliance perspective because compliance and transfer rules were enforced off-chain, keeping tokens confined within a single platform.

Tokenization 3.0 – ERC-3643, Identity-Based Permissioned Tokens

The open sourcing of ERC-3643 in 2022 introduced identity-based permissioned tokens, ensuring that ownership and compliance remain reliable in every situation.

Interoperability, Not Competition: A common misconception is that using ERC-3643 means competing with other token standards. However, ERC-3643 is built for interoperability within the blockchain ecosystem. By being fully compatible with ERC-20 tokens, assets tokenized using ERC-3643 can integrate seamlessly with existing wallets, DeFi platforms, and analytics tools.

Modular and Composable: ERC-3643 allows projects to start with a flexible compliance framework and expand its functionality through composability. This modular approach enables projects to combine additional smart contracts (e.g., automated capital calls, dividend allocation, …) to meet specific needs.

Eliminating Single Points of Failure: ERC-3643 links token ownership to identity addresses instead of wallets. Authorized parties validate an investor’s eligibility and issue proofs onchain to their identity addresses, ensuring that compliance and ownership records are always tied to a verified identity. Wallets and platforms won’t be single points of failure, as the ownership remains secure and verifiable through the onchain identity.

Issuer Control, Not Platform Lock-In: As transfer rules are enforced onchain, issuers and their appointed agents maintain control of their tokens. They always have real-time insights on who owns what without relying on distributors’ sub-ledgers. With this approach, issuers are no longer restricted to tokenization platform silos. They represent their assets onchain, appoint agents, and activate distribution channels.

ERC-3643 paves the way for breaking tokenization silos, enabling cross-smart contracts and cross-platform interoperability. Alongside 78 industry leaders, we proudly support the non-profit ERC3643 Association in improving this open market standard. Together, we can drive lasting impact through collaboration.
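The core idea of identity-based permissioned tokens can be sketched as a transfer check against an identity registry rather than bare wallet addresses. This is a conceptual model in JavaScript, not the actual ERC-3643 Solidity interface; the addresses and registry shape are illustrative:

```javascript
// Hypothetical identity registry: wallet address -> verified onchain identity.
const identityRegistry = new Map([
  ['0xAlice', { identity: 'did:example:alice', verified: true }],
  ['0xBob',   { identity: 'did:example:bob',   verified: false }],
  ['0xCarol', { identity: 'did:example:carol', verified: true }],
]);

// Unlike a plain ERC-20 transfer, eligibility is decided by the verified
// identities behind the wallets, so compliance holds on any platform.
function canTransfer(from, to) {
  const sender = identityRegistry.get(from);
  const receiver = identityRegistry.get(to);
  return Boolean(sender && sender.verified && receiver && receiver.verified);
}

console.log(canTransfer('0xAlice', '0xBob'));   // false: receiver has no verified identity
console.log(canTransfer('0xAlice', '0xCarol')); // true: both sides resolve to verified identities
```

In the real standard this check runs in the token's smart contract on every transfer, which is why ownership records stay reliable even when wallets change.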

So, what do you think Tokenization 4.0 will look like?

Tokeny Spotlight

PARTNERSHIP

Tokeny is integrating Chainlink Labs infrastructure within our solutions.

Read More

INTERVIEW

CCO Daniel Coheur was interviewed at the iconic NYSE to discuss buy-side trends.

Read More

EVENT

We attended DAW with one clear message: the strong need for an interoperable ecosystem built on shared standards.

Read More

PARTNERSHIP

Tokeny partners with AMA-AMBIOGEO to tokenize $4.6 Billion Gold Reserves.

Read More

PRODUCT NEWSLETTER

We discuss how our platform empowers fund servicers to act in onchain finance.

Read More

NEW TEAM MEMBER

Meet Jordi Reig, our new Head of Engineering. Welcome to the team!

Read More

Tokeny Events

RWA Summit New York
October 22nd – 23rd, 2024 | 🇺🇸 USA

Register Now

Smartcon
October 30th – 31st, 2024 | 🇭🇰 Hong Kong

Register Now

Fintech Festival Singapore
November 6th – 8th, 2024 | 🇸🇬 Singapore

Register Now

The Digital Money Event
October 23rd, 2024 | 🇬🇧 United Kingdom

Register Now

Digital Assets Week Singapore
November 4th – 5th, 2024 | 🇸🇬 Singapore

Register Now

ERC3643 Association Recap

Press Release

ERC3643 Association Leads RWA Tokenization Standardization with 78 Industry Leaders.

Learn more here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.


The post Institutional Tokenization 3.0: Break Silos appeared first on Tokeny.


uqudo

Web 3.6.1 and Mobile SDK 3.2.0 updates

The post Web 3.6.1 and Mobile SDK 3.2.0 updates appeared first on uqudo.



Spherical Cow Consulting

The Importance of Digital Identity Wallet Standards

Digital identity wallets are too important to be treated as just an app, with only your favorite app store’s guidelines. I also think it’s too important to solely rely on a mess of government guidelines written with varying degrees of clarity. We already have so many questions. Continue reading The Importance of Digital Identity Wallet Standards

Digital identity wallets are too important to be treated as just an app, with only your favorite app store’s guidelines. I also think it’s too important to solely rely on a mess of government guidelines written with varying degrees of clarity.

We already have so many questions, like:

How much should a wallet know about the credentials it holds? How should it make queries to access only the required information? How can users find out which credentials are stored in each wallet, and how should wallets communicate what they contain? Should wallets be able to interoperate to help users find and share entire credentials or just specific details?

Answers in Regulation?

I wrote a post about the EU’s Digital Identity Architecture Reference Framework (ARF) a few months ago. The ARF is probably the best source of guidance right now, given its comprehensive approach and the level of collaboration involved. It outlines the expected behaviors of a digital identity wallet, with its development and use supported by the eIDAS 2.0 regulation.

“The ARF is an outline that provides the first blush of a framework for how digital wallets will work in the EU. The European Commission kicked off the work through a Commission Recommendation from June 2021 that urged Member States to develop common standards, technical specifications, and best practices in response to the eIDAS 2.0 regulation. EU Member States sent their experts to join a collaborative process to build the framework.” – The EU Digital Identity Architecture Reference Framework – How to Get There From Here

All that said, ARFs are not specifications. They describe a design, but the details of the implementation, such as what, when, and how different protocols must be used, are left open to interpretation. ARFs lay the groundwork for building specific technical standards, guiding more detailed development. Incredibly helpful, but not enough by itself to help ensure clarity and interoperability.

Answers in Open-Source Libraries

OK, so it’s good that there is a reference framework under development. For that matter, there is also at least one effort to build and share code libraries that will support the development of digital identity wallets: the Open Wallet Foundation.

From their website, “The OWF aims to set best practices for digital wallet technology through collaboration on standards-based OSS components that issuers, wallet providers and relying parties can use to bootstrap implementations that preserve user choice, security and privacy.”

They’ve also recently published a “Wallet Safety Guide” that provides guidelines to developers on safe ways to implement a wallet. The guide offers four areas of focus, or ‘pillars’: Privacy, Security, Supporting Functions, and Governance. It is an incredibly helpful guide and, when combined with the code libraries their members are developing and making available, takes developers one step closer to creating a wallet that is fit-for-purpose and safe for users.

But, similar to an ARF, these aren’t specifications. There are still low-level details that need clarity. Do we have the right protocols that allow all the components of a digital identity wallet to share, in a controlled manner, data about itself and the credentials it contains?

Answers in Specifications

But surely there must be something in the standards world that applies to wallets! Well, sort of. There are quite a few specifications about credentials and their properties (for example, OpenID for Verifiable Presentations, which focuses on presenting identity data securely; the W3C’s Verifiable Credentials Data Model, which defines a standard for digital credentials; and, of course, the work in the IETF SPICE working group). Some of these are approved standards; some are works in progress.

One of the works in progress is the Digital Credentials API within the W3C. This API aims to create a standardized way for web browsers, wallets, and verifiers to interact, ensuring data privacy during credential exchanges. Tim Cappalli, perhaps better known for his championing of passkeys, created a great diagram that shows what specific step in the process the DC API work is focused on. It also shows where other specifications need to exist for the other steps.

Do Standards Really Matter?

So, there are frameworks. There are open-source libraries. There are credential specs and work to standardize APIs. Why isn’t that enough? Digital identity wallets are part of such a new field, surely building experience this way is a good thing!

That’s a perspective that a lot of people have, and in most situations I would agree with them. Building standards without understanding real-world use cases is an annoying academic exercise that can waste a lot of time. In this case, however, we’re talking about testing out these new ideas in a way that involves personal data. A LOT of personal data. We’re not going to get it right the first time around. So what does that mean for all that personal data? It means a high probability of exposing it to entities that shouldn’t have it.

Wrap Up

At the end of the closing keynote panel at Authenticate 2024, Andi Hindle asked “Are wallets going to be successful? Are they the right path forward?” My answer was “The question is irrelevant.” Yes, that’s a cheeky way of putting it, but digital identity wallets are already here. They are already being implemented. They are not going to go away. And they are introducing new threat vectors that we are hoping regulations will protect us from.

That’s a great model … if everyone and everything wants to abide by the law and agrees to interpret it in the same way. But for a world where that ideal is perhaps not the most realistic, having technical specifications that allow or prevent behavior in a very predictable fashion sure would be nice.

Regulation, like the EU’s eIDAS 2.0, and open-source efforts such as the Open Wallet Foundation are important steps in guiding digital identity wallets. However, we need to complement these efforts with detailed technical standards that ensure wallets operate predictably and securely. This layered approach—combining regulation, open-source libraries, and technical standards—can create a safer ecosystem for users.

So let’s get moving.

Reach out if you want to learn more about navigating this process or need support with standards development. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post The Importance of Digital Identity Wallet Standards appeared first on Spherical Cow Consulting.


Ockto

Expectations for CCD2: clarity, a level playing field, and proportionality

More clarity, workable proportionality requirements, and a level playing field for all parties: that is what is expected and hoped for from the upcoming Consumer Credit Directive 2 (CCD2). This European directive broadens the scope of parties subject to supervision and brings changes for credit providers, who must prepare for stricter rules and new requirements.


KuppingerCole

Analyst's View: Endpoint Protection, Detection and Response (EPDR)

by John Tolbert

In the rapidly evolving landscape of cybersecurity threats, Endpoint Protection Detection and Response (EPDR) solutions are, without a doubt, indispensable components of an organization's security architecture. EPDR solutions bridge a critical gap by integrating proactive endpoint protection with advanced detection and real-time response capabilities. This unified approach enables organizations not only to prevent known malware infections but also to swiftly identify, analyze, and mitigate complex threats that can evade conventional defenses. As endpoints—ranging from desktops and laptops to mobile devices—serve as pivotal entry points for cyber adversaries, incorporating EPDR ensures comprehensive visibility and robust protection across the entire digital workspace, thereby fortifying the organization's overall security posture.

Lockstep

Back to the Future with Verifiable Credentials

I recently co-authored a white paper about verifiable credentials, with the founder and CEO of the exciting Australian start-up, Verified Orchestration.

“Back to the Future — Revolutionising Digital ID with new technology and centuries old governance” (PDF) looks at how verifiable credentials enable whole ecosystems to digitise their established governance structures, contexts and rules, as well as their transactions.

This blog is a lightly edited extract from our full paper.

Photo credit: Sean Bernard, Flickr (Creative Commons Licence). 

Verifiable Credentials are new again

Verifiable Credentials have been around for a long time. You have more than likely used Verifiable Credentials without knowing it. They are commonplace, embedded in mobile phones, payment cards, e-passports and smart watches.

The mobile phone SIM is an early example and provides a perfect explainer. The Subscriber Identification Module is both a special purpose integrated circuit and an administrative record. The SIM holds an official copy of your account information and your unique international subscriber number, all of which is digitally signed by your phone company.

The SIM also holds a unique cryptographic key which is used by the handset to digitally sign (in simple terms, “mark”) the start and stop of every call you make. This signature is verifiable by network operators globally and allows them to know which subscriber is making which call from what location, anywhere in the world.

The global cell phone network could not function without Verifiable Credentials.

The same goes for global credit card payments. The EMV chip card system long ago replaced magnetic stripe cards, which were vulnerable to skimming and counterfeiting. Instead of a magnetic stripe card storing and passively transferring cardholder data to a terminal, the chip card carries a Verifiable Credential holding the cardholder data, a cardholder key, and the signature (i.e. endorsement) of the bank which issued the card. Every payment made with the chip card is signed (marked) by the cardholder key, rendering it tamper resistant and globally reconcilable.
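
The issue-once, verify-anywhere pattern can be sketched in a few lines. The toy RSA parameters and card record below are purely illustrative (real EMV uses certified keys and dedicated secure hardware); the sketch only shows the shape of the mechanism: the issuer signs once, and any terminal holding the issuer's public key can verify, offline.

```python
import hashlib

# Toy RSA parameters -- illustrative only, far too small to be secure.
p, q, e = 61, 53, 17
n = p * q                              # public modulus
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def sign(message: bytes, priv: int) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, priv, n)

def verify(message: bytes, signature: int, pub: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub, n) == digest

# The "issuer" (bank) signs the cardholder record once, at issuance.
card_record = b"cardholder=A.Smith;pan=4000-0000-0000-0001"
issuer_sig = sign(card_record, d)

# Any terminal holding the issuer's public key (n, e) can check it.
assert verify(card_record, issuer_sig, e)
# A tampered signature fails verification.
assert not verify(card_record, (issuer_sig + 1) % n, e)
```

The same sign/verify shape, with a per-card key, is what marks each individual transaction.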

Verifiable Credentials are a technology that puts instrumental pieces of information about individuals into the hands of those individuals and empowers them to present that information directly, purposefully and securely.

Verifiable Credentials are decentralised in that the information they carry is valid on its face and can be presented directly, peer to peer, without intermediation.

Verifiable Credentials and the identity problem

While Verifiable Credentials have been used for decades, they have been reenergised lately to help solve digital identity. SIMs and EMV cards are highly specialised, dedicated to singular applications, with proprietary standards overseen by industry associations, and bound to physical chips. Today, Verifiable Credentials are being standardised by several global working groups, with a view to extended use cases and applications.

Why the shift to Verifiable Credentials? The way we handle most identity information online has historically followed a distinctly centralised pattern. Instead of putting identity information in the hands of the holder, we tend to keep ostensibly official copies in different servers where it sits waiting to be exercised on the holder’s behalf.

To put their digital identity to use, the holder has to activate it on the server somehow (usually by quoting a plaintext username and password), triggering a cascade of actions in their name. Internet banking, online shopping, remote workflows, e-health, e-government, travel booking, ticketing and so on all follow the same pattern.

Centralised identity management is odd compared with regular credentials. Imagine if we handled driver’s licenses in the same way as current online identity: the motor vehicle registry would ask you to give your license back to them, and in its place issue you a username and password to access it and release it whenever you happen to need it.

The online world has followed this unreal pattern ever since the “Identity Metasystem” was published in 2006, promoting the canonical arrangement where a Subject and a Relying Party deal with each other via a third-party Identity Provider.

The three-party model is entirely reasonable with respect to the way authoritative information about parties is sourced; however, the Identity Metasystem also dictated that most interactions would draw down identity information in real time. That’s the odd part of digital identity.

The new wave of interest in Verifiable Credentials crystallised in July 2018 when the World Wide Web Consortium (W3C) released the Verifiable Credentials Data Model 1.0 with the byline “Expressing verifiable information on the Web”.

[For some reason, subsequent iterations of the W3C VC Data Model dropped the mention of “verifiable information”. I thought that was the best thing in the specification.]

Back to the future

Verifiable Credentials are a revolutionary digital technology, placing cryptographic keys under the sole control of the credential holder, making credentials highly resistant to theft, counterfeiting or takeover. The new wave of standards now allows customised Verifiable Credentials to be securely carried in mobile digital wallets and used in a range of business applications to reliably prove endorsed facts and figures in their specific contexts.

By decentralising the presentation of credentials, and conserving the established local rules that govern how they are issued and consumed, cryptographically Verifiable Credentials are far less disturbing to business processes than general purpose digital identities and the centralised presentation flows entailed by the Identity Metasystem.

The post Back to the Future with Verifiable Credentials appeared first on Lockstep.


Datarella

Orchestration Systems in Track & Trust

This is the fourth article in a series of technical posts about how Track & Trust works at a component level. To begin with, we’ll outline how our orchestration systems, real-time monitoring, and dashboards work together. Additionally, we’ll explore the challenges we faced and how we overcame them. Quick navigation links to follow-up articles will be provided at the bottom of each article once the series is complete.

Orchestration Systems and CI/CD

To manage a large fleet of custom-built mesh node devices, we needed to develop advanced orchestration systems. Specifically, these systems enable us to provision and manage devices efficiently. Furthermore, we created a special approach to real-time monitoring of node health in the field. As a result, Track & Trust includes a full suite of dashboards that we can now use to monitor key performance indicators and display the outputs of our Probabilistic 360° Supply Chain Tracking product. In addition, the orchestration systems we built are now fully operational and enable a highly flexible approach to updating and managing the software deployed to our hardware in the field. Let’s jump into how we accomplished this feat.

The Addressing Challenge

Most people aren’t aware of this, but devices on 4G connections don’t have static IP addresses. The IP addresses are assigned by the constantly shifting cellular towers the mobile device connects to. This is a real problem if you want to set up a software pipeline to trigger updates on mobile or IoT devices. To solve this, we set up a virtual private network (VPN) based on the open-source WireGuard protocol. Essentially, it’s a software-defined network with Tailscale under the hood. This approach means using a peer-to-peer mesh network to handle addressing devices inside our mesh network (pretty meta, huh?). By routing our network traffic through the VPN we achieved much better security. On top of this, we got static virtual addresses, which allowed us to name and manage the machines at the network level.

Push or Pull Orchestration Systems?

With the addressing problem solved, another challenge popped up. If the machines are only online intermittently, a push approach to updates becomes impossible: a pushed update will never reach the machines that happen to be offline at that moment. The solution was a scheduled automation that pulls updates from an Ansible automation engine, which in turn is controlled by a continuous integration and deployment system built around Semaphore. This enabled us to write code in an integrated development environment, push it to GitLab, and then trigger a build that the machines pick up. These builds then deploy automatically on a daily basis whenever the machines happen to come online.
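
A toy simulation illustrates why pull wins for an intermittently connected fleet. The machine count, connectivity probability, and schedule below are made up for illustration: a one-shot push only reaches whoever is online at push time, while scheduled pulls eventually catch every machine.

```python
import random

def simulate(mode: str, machines: int = 20, days: int = 30,
             online_prob: float = 0.4, seed: int = 7) -> int:
    """Return how many machines end up updated after `days`."""
    rng = random.Random(seed)
    updated = [False] * machines
    for day in range(days):
        online = [rng.random() < online_prob for _ in range(machines)]
        if mode == "push" and day == 0:
            # One-shot push: only machines online at push time receive it.
            for i in range(machines):
                if online[i]:
                    updated[i] = True
        elif mode == "pull":
            # Each machine pulls on its own schedule whenever it is online.
            for i in range(machines):
                if online[i]:
                    updated[i] = True
    return sum(updated)

print("push:", simulate("push"), "pull:", simulate("pull"))
```

With the same random seed, the pull fleet always covers at least as many machines as the push fleet, and in practice converges on all of them.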

While we were still heavily in development, having this pipeline in place vastly increased our efficiency. We were able to write code and deploy to our custom-made IoT hardware essentially as though it were sitting in a cloud environment. On top of this, we were able to designate groups of machines as dev machines and others as stage or prod machines. This combination allowed us to develop and test both hardware and software independently of production and staging environments. It empowered us to rapidly iterate on the status quo without breaking hardware already in use in the field. Additionally, the moment we were ready to update mesh nodes in the field, we could earmark them to update themselves with well-tested code the next time they came online.

Real-Time Monitoring

We needed advanced monitoring to easily update our software fleet. To achieve this, we set up an end-to-end observability pipeline using Fluent Bit. This pipeline routed data in real time from our mesh nodes into a database. Subsequently, we displayed real-time data in Grafana for management purposes. This approach enabled us to debug faster without having to SSH into a specific node to get its logs.

Finally, our Grafana dashboards showed us if all services were up and running, as well as key indicators of device health such as memory usage, temperature, and battery life. We could display logs in the timeframes we were interested in for the machine groups we wanted to monitor. In conclusion, this monitoring technology gave us valuable insights into ensuring our deployed hardware was working correctly and allowed us to fix issues quickly.
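
Conceptually, the first step of such a pipeline turns a raw health line from a node into a structured record before it is shipped to the database. A rough sketch of that step — the field names (`mem`, `temp`, `batt`) and the record schema are hypothetical, not the actual Track & Trust format:

```python
import json
from datetime import datetime, timezone

def to_record(node_id: str, raw_line: str) -> str:
    """Parse a 'key=value' device health line into a JSON record for shipping.

    The field names below are made up for illustration; a real pipeline
    matches whatever the node firmware actually emits.
    """
    fields = dict(pair.split("=") for pair in raw_line.split())
    record = {
        "node": node_id,
        "ts": datetime.now(timezone.utc).isoformat(),
        "mem_pct": float(fields.get("mem", "0")),
        "temp_c": float(fields.get("temp", "0")),
        "batt_pct": float(fields.get("batt", "0")),
    }
    return json.dumps(record)

line = "mem=62.5 temp=41.0 batt=87.0"
print(to_record("node-017", line))
```

Once records share a schema like this, dashboard queries over memory, temperature, and battery across machine groups become straightforward.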

The Track & Trust dashboard with real-time information about each machine to optimize field operations


The post Orchestration Systems in Track & Trust appeared first on DATARELLA.


KuppingerCole

Tackling AI-Driven Cyber Risks: A Look at New Security Regulations

by Prof. Dr. Dennis-Kenji Kipker

As artificial intelligence continues to evolve, so do the cybersecurity challenges it brings. AI-enabled cyber threats are opening up new attack vectors, posing significant risks to organizations across industries. At cyberevolution 2024, Dennis-Kenji Kipker, Research Director at cyberintelligence.institute, will address these pressing concerns in his keynote.

Dennis will highlight the growing complexity of AI-related risks and explain why relying solely on regulations like the recently enacted EU AI Act won't be enough to protect against emerging threats. Instead, he advocates for a more proactive approach, using advanced AI-driven tools for continuous monitoring and threat detection. His session will explore how organizations can build resilient defenses by integrating AI technology into their cybersecurity strategies, while ensuring that new regulatory frameworks such as the NIS2 Directive and Cyber Resilience Act work in harmony with these technological advancements.

For professionals navigating the intersection of AI, cybersecurity, and regulation, Dennis' insights will be invaluable in understanding how to mitigate risks and stay ahead of potential threats. In the meantime, watch our interview with him to get insights into how these emerging regulations are designed to keep up with evolving cyber threats and ensure a safer digital future.

Sunday, 20. October 2024

KuppingerCole

Identity Management in a World of Automated Systems: Machine Identities

In this conversation, Matthias and Martin explore the concept of machine identities, discussing their significance in modern IT infrastructures. They discuss the challenges of managing these identities, the importance of lifecycle management, and the impact of regulations on cybersecurity. The conversation emphasizes the need for organizations to understand and properly manage machine identities to ensure security and compliance in an increasingly complex digital landscape.



Friday, 18. October 2024

1Kosmos BlockID

MGM, Caesars Hacks: More of the Same Is Coming Your Way–But Here’s How to Stop It

Given the stunning success of the recent hacks at MGM and Caesars, it’s a safe bet what happened in Vegas won’t stay there for long. Even though technology to prevent such breaches is readily available, there’s every reason to believe large organizations in any number of sectors could soon face a rude awakening.

Success breeds success, after all. It also inspires copycats. The attacks on Caesars Entertainment and MGM Resorts International in early September appear to have been perpetrated by a group of teenagers and young adults that employs simple social engineering techniques to infiltrate corporate systems for fun and serious profit.

Dubbed “Scattered Spider” by some security analysts and UNC3944 or “Muddled Libra” by others, the group of Gen-Z threat actors is believed to have pulled off a series of cryptocurrency heists before breaching and then extorting Western Digital and other technology firms over the past few years. Reuters reports the group has been implicated in 52 attacks spanning multiple industries worldwide since 2022.

Specifics in the casino breaches are still emerging. However, it appears that operatives in the MGM attack used LinkedIn profile information to impersonate a resort employee in “vishing” calls to an outsourced IT support vendor, requesting access to the employee’s corporate accounts after getting “accidentally” locked out. After gaining entry, the hackers gained super administrator rights to MGM’s Okta environment. They even configured a second identity provider to bypass multi-factor authentication (MFA) and impersonate highly privileged users within the corporate systems.

In a word: diabolical, especially for a group of suspected 17- to 22-year-olds. But as the Washington Post reports, Scattered Spider’s Vegas jackpot also represents a troubling new escalation in the group’s MO. The hackers threw the company into utter chaos by deploying crippling ransomware from notorious Russian cyber gang ALPHV into MGM’s systems. Ten days into the breach, MGM was still struggling to repair corporate email, restaurant reservation systems, hotel booking operations, slot machines, and digital keycard access at its Aria, Bellagio, and MGM Grand properties. There’s little reason to believe Scattered Spider isn’t already scouting new prey.

Gaming the System: Harvesting Passwords, Short-Circuiting MFA

Ransomware attacks are nothing new, of course. Last year, more than 620 million ransomware attacks worldwide cost victims more than $30 billion. According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches stem from credentials stolen through phishing, vishing, and SIM-swapping attacks.

Indeed, stolen passwords are implicated in up to $25 million in average losses suffered by a third of all businesses that have fallen victim to cyberattacks over the last 36 months. When an account takeover (ATO) leads to a data breach, it can mean an average additional cost of $9.5 million per incident for US-based companies. ATOs account for more than $300 million in losses annually. And as the Washington Post points out, Scattered Spider and its Eastern European business partners could worsen matters in coming weeks.

For one thing, you have financially motivated, English-speaking hackers with a proven talent for pulling off social engineering and data exfiltration schemes. Now add the Russian “ransomware-as-a-service” operatives believed to be behind the Colonial Pipeline attack and an underworld network as technologically sophisticated as any modern enterprise.

Mix in plentiful targets with outsourced IT support and call center operations crewed by untrained, often short-term employees vulnerable to vishing. And sprinkle in emerging, AI-powered phishing and vishing tactics and automated credentials-stuffing technologies. Put it together, and far too many organizations in health care, telecom, government, financial services, and others may be vulnerable to an emboldened Scattered Spider and copycat groups. The good news: Organizations can quickly deploy effective defenses. But they’d better move fast.

No More Rolling the Dice with Outdated Forms of MFA

According to a recent survey from Google and Ipsos, a successful data breach can erode customer trust by as much as 44%. As the MGM and Caesars breaches so vividly illustrate, legacy forms of multifactor authentication (MFA) won’t cut it anymore. Cybercriminal organizations like Scattered Spider have clearly developed inventive ways to acquire login credentials and circumvent things like one-time passcodes and limited biometric authentication systems designed to confirm the legitimate user is attempting to access their account.

The problem: Traditional forms of MFA are built around login passwords and a device instead of the identity of the person accessing an account. Even with Windows Hello for Business (WHfB) and Okta Verify Authenticator, anybody with administrative access can register things like user biometrics to any device they can access—or set up an alternative identity provider to bypass authentication measures altogether.

For some business applications, that may not be a significant risk. But it still leaves the door open to account compromise that puts IT and security teams in reactive mode against data breaches and ransomware after access to systems has already been granted. Fortunately, a new generation of strong, non-phishable biometric identity solutions is changing all that.

Enter: “Liveness”-based Biometric Authentication

With traditional forms of MFA becoming so unreliable as a means of identity verification, modern forms of biometric authentication are helping to set a new standard for security and convenience. Solutions certified to FIDO2, iBeta biometrics, and NIST 800-63-3 standards, for instance, use “live” biometric markers tied to a registered identity to provide reliable, strong authentication impervious to account takeover.

These modern biometric solutions match machine-verified identity to government-issued credentials (driver’s license, state ID, passport, etc.) and enable non-phishable multi-factor authentication when users log in to digital services.

1Kosmos, for instance, uses the private key of a matched public-private pair in the user’s device as a possession factor (i.e., “what you have”), while a live facial scan becomes the “what you are,” or inherence, factor. To access a site, app, or system, a live image scan is compared to an image scan captured at the time of enrollment. If they match, the person authenticating is confirmed to be, in fact, the authorized user—and not a bot, deepfake, or impostor—with 99.9% accuracy.
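
The possession factor boils down to a challenge-response over the enrolled key pair: the server issues a fresh nonce, the device signs it with its private key, and the server checks the response against the enrolled public key. A minimal sketch — the toy RSA parameters stand in for real device-held hardware keys, and this is the general shape of such protocols, not 1Kosmos’s actual implementation:

```python
import hashlib, secrets

# Toy RSA key pair standing in for the device's enrolled key.
# These parameters are illustrative only and wildly insecure.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # device-held private exponent

def device_sign(nonce: bytes) -> int:
    """The device proves possession of the private key by signing the nonce."""
    h = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(h, d, n)

def server_verify(nonce: bytes, sig: int) -> bool:
    """The server checks the response against the enrolled public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(sig, e, n) == h

nonce = secrets.token_bytes(16)          # fresh per login: defeats replay
response = device_sign(nonce)            # happens on the user's device
assert server_verify(nonce, response)                # possession confirmed
assert not server_verify(nonce, (response + 1) % n)  # tampered response fails
```

Because the private key never leaves the device, there is no shared secret for a phisher to harvest — which is precisely what makes this factor non-phishable.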

This technology is widely available and supports a consistent onboarding and authentication experience into all apps, devices, systems, and environments—including existing privileged access management systems. Any organization can stop phishing, ransomware attacks, and data breaches before hackers can infiltrate accounts. Scattered Spider simply provided an urgent new reason to stop gambling with security now.

To learn more about 1Kosmos, the only NIST, FIDO2, and iBeta biometrics-certified platform on the market, click here.

The post MGM, Caesars Hacks: More of the Same Is Coming Your Way–But Here’s How to Stop It appeared first on 1Kosmos.


Spruce Systems

What’s the Difference Between a Physical ID Card and a Verifiable Digital Credential?

Individuals are starting to embrace digital identity by replacing physical wallets with smartphone-stored verifiable digital credentials, offering enhanced privacy, security, and convenience.

Gen Z are giving up their wallets – gladly. 

According to a recent New York Times report, teenagers and twentysomethings think wallets are “uncool.” Instead, they’re increasingly storing every payment method, document, or credential they need on their smartphones: credit cards, plane tickets, insurance cards, transit passes, driver’s licenses, and gym memberships.

Leaving home without a wallet might sound terrifying, but it’s quickly becoming the new normal. Digital driver’s licenses, now available in states like New York and California, are poised to spread nationwide. Businesses and agencies that don’t get up to speed on the new digital identity risk being left behind: one 19-year-old told the Times that if a store doesn’t accept Apple Pay, she “won’t give them my business.” 

You might assume these verifiable digital credentials are just photos of conventional documents or plastic ID cards. But behind the curtain, there’s a lot more going on, involving advanced cryptography and hardware. While early adopters may care most about the convenience of carrying one less thing, the real point of digital identity is that it’s more private, works better online, and can’t be faked as easily as physical ID. 

So what are these digital cards, really – and how do they work? How are they different from physical ID or credit cards? 

Most importantly, if they’re just files on a smartphone, why are they trustworthy?

The Basics of Digital ID Technology

At the most basic level, a verifiable digital credential is not an image but a string of numbers. It relies upon cryptographic signatures that can be protected by a chip in your smartphone called a “secure element.” This digital ‘signature’ is unique to the credential issuer – for example, all mobile driver’s licenses are digitally signed by a state’s Department of Motor Vehicles.

These ‘digital signatures’ aren’t simply copies of an image of a human signature. Instead, they’re unique alphanumeric identifiers that confirm a document’s authentic source. Thanks to strong encryption methods, these signatures can’t be reproduced or impersonated by another entity. 

A verifiable digital credential can be checked in various ways by a verifier, such as a rental agent or traffic cop. In many cases, a verifier will already have a record of an issuer’s public signature (that is, an encrypted string of numbers that is uniquely tied to the issuer alone), and will be able to confirm a credential’s authenticity without pinging back to a centralized server. This is significant because it can reduce the digital ‘trail’ left behind when a credential is checked, and that trail is one notable privacy risk of this new all-digital system.

A physical credential uses quite different techniques to prove its authenticity. Physical anti-fraud measures including micro-printing, holograms, and see-through panels are the first line of defense against fakes. These physical measures work well enough for low-stakes conventional applications, like proving your age to buy alcohol. 

| Physical Credential | Verifiable Digital Credential |
| --- | --- |
| Secured by holograms, bar codes, and databases | Secured by unique encrypted signatures |
| No batteries required | Requires at least some device power |
| Reveals all printed information when presented | Allows selective disclosure |
| Easily spoofed online | Secure for online use |
| Requires “phoning home” for full verification | Often verifiable without “phoning home” |
| Reissued every few years | Reissued regularly |
| Can be faked using AI | Requires physical infiltration to fake |

But that example might highlight the problem: there are a lot of high school kids with fake IDs. Holograms and other physical security elements are a barrier, but they can be faked – whether in pursuit of underage drinking, or more nefarious goals. So in more serious face-to-face interactions, such as when you’re pulled over by a police officer, your ID may be checked remotely by sending your ID number to a central database. This incurs privacy risk since it effectively creates a record of your location or activities.

Things like holograms and microtext are particularly easy to fake when an ID is being used online. In fact, a rising wave of online identity fraud, for everything from opening bank accounts to applying for jobs, is a major motivation for the shift to digital identity. Verifiable digital credentials, unlike physical cards, are tailor-made for online use: because their confirming signatures are encrypted, they can be reliably confirmed online without the risk of being stolen or copied. 

Choosing your Data

Gen Z may love leaving their wallets at home, but the biggest benefit of digital identity for most users will be having more control over their personal data – far, far more control.

A paper credential has to be handed over all at once to someone checking it, which usually means they’re getting way more information about you than they actually need. That can incur serious privacy risk, for instance if a bartender decides to take an interest in your home address.

Digital identity instead allows what’s known as “selective disclosure.” California’s mobile driver’s license is fairly typical – when the ID is checked, an app displays what information is being requested, and only after the user authorizes the request is the information transmitted. 

Digital systems can also do even more surprising things with data, such as proving that you’re over 21 without disclosing your specific date of birth. These features are huge steps forward in user privacy and data control. 
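
One common way to implement selective disclosure is with salted hashes, as in SD-JWT-style credentials: the issuer signs one digest per attribute, and the holder reveals only the attribute and salt the verifier asked for. A simplified sketch with made-up attribute names — the issuer’s signature over the digests is omitted, and the over-21 claim here is a precomputed attribute rather than a zero-knowledge proof derived from the birth date:

```python
import hashlib, os

def salted_digest(value: str, salt: str) -> str:
    return hashlib.sha256((salt + value).encode()).hexdigest()

# Issuer: commit to each attribute separately, then sign the digests
# (the signature itself is omitted from this sketch).
attributes = {"name": "Alice", "dob": "1999-01-01", "age_over_21": "true"}
salts = {k: os.urandom(16).hex() for k in attributes}
digests = {k: salted_digest(v, salts[k]) for k, v in attributes.items()}

# Holder: disclose only the one claim the verifier asked for.
disclosed = ("age_over_21", attributes["age_over_21"], salts["age_over_21"])

# Verifier: recompute the digest; name and date of birth stay hidden,
# because their salted digests reveal nothing without the salts.
name, value, salt = disclosed
assert salted_digest(value, salt) == digests[name]
assert salted_digest("false", salt) != digests[name]
```

The salt per attribute matters: without it, a verifier could guess low-entropy values (like a birth date) by brute-forcing the hash.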

Using Digital ID Offline 

Finally, you might wonder how a digital driver’s license or other credential works when your smartphone (or other device) isn’t connected to the internet. It’s increasingly rare, but there are still plenty of places and moments you just don’t have a wireless connection.

The good news is that a verifiable digital credential works just as well offline as when you’re connected to the internet. The digital signature that authenticates a credential is, again, stored directly on your device, not on a remote server. By the same token, verifiers will often already have a record of relevant issuer signatures, making it possible for them to verify your ID without an internet connection.

Different, and Mostly Better

This has been a high-level overview of some of the differences between physical and digital identity cards or other credentials. There is still much, much more going on under the surface, particularly when it comes to grasping how encryption and digital signatures work.

Hopefully it’s clear even at a glance that there are major differences between digital and physical credentials – including differences that will subtly change how we use and think about identification documents. Many of those differences are clear efficiencies, but a handful may make digital less convenient than paper credentials in particular ways. 

The advantages in user privacy and overall system security will hopefully make those tradeoffs worthwhile, but what’s clear is that the change is just over the horizon. If you need help navigating the new landscape, reach out to SpruceID.

Get in Touch

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ocean Protocol

F1 Racing 2024 Strategy Analysis Challenge— Final Classification

Introduction

The 2024 Formula 1 Racing Challenge provided data scientists with detailed lap-by-lap data from the current F1 season. The challenge focused on analyzing key trace elements such as tire compounds, pit stop strategies, and lap times. Provided information included telemetry data covering each race, including variables like tire choices, stint lengths, lap times, and pit stop durations. Participants used this information to explore patterns that influenced race outcomes.

Each participant performed Exploratory Data Analysis (EDA) to uncover relationships between variables like tire degradation and race performance. They analyzed how tire compounds, such as soft, medium, and hard, impacted lap times over different stints and how teams adjusted pit stop strategies depending on track conditions. Drivers’ positions and lap times were linked to tire management, showing the importance of optimizing tire usage for each race phase.

The analysis also explored how race length influenced pit stop frequency and tire choice, with drivers using multiple compounds across various stints. The participants applied correlation analysis to measure how decisions made during pit stops impacted final race positions.

Top Submissions

1st Place: Yunus and Firuze

Yunus Gümüşsoy and Firuze Simay Sezgin approached the analysis with a comprehensive view of Formula 1 race strategies, which set their report apart. Although all participants had to analyze tire performance, pit stops, and lap times, Yunus and Firuze’s analysis emphasized how different strategy elements influenced race outcomes. They focused on how teams adapted to changing race conditions, showing how decisions related to tire wear management and pit stop timing played a crucial role in shaping performance.

Their report went beyond simply presenting data on tire compounds and lap times by examining how race interruptions, such as safety car deployments, forced teams to adjust their real-time strategies. Yunus and Firuze contextualized these strategic decisions within the flow of the race, explaining how early choices like tire selection and pit stop frequency had long-term impacts. By focusing on the timing and reasoning behind these decisions, they provided more profound insights into the effectiveness of the approaches taken by teams.

The strength of their report lies in how they connected these factors into a cohesive analysis. Instead of viewing tire performance or pit stops as isolated variables, Yunus and Firuze demonstrated how these elements were interdependent and how their combined effects influenced overall race strategy. This integrated approach clarified how teams balanced short-term decisions with long-term race objectives, offering a nuanced view of race strategy that went beyond surface-level observations.

2nd Place: Luca Ordronneau

Luca Ordronneau’s report distinguished itself by focusing heavily on the variability of tire strategies across different races. His analysis centered on how teams adjusted their use of tire compounds based on specific track characteristics, such as tire degradation rates and weather conditions. By comparing tire choices across multiple races, Luca highlighted how some teams favored soft tires for shorter stints on high-degradation tracks, while others opted for medium or hard tires for endurance on less demanding circuits.

One of Luca’s key strengths was his attention to how tire strategies shifted throughout the race. He broke down the number of laps completed on each tire compound, revealing patterns teams used to optimize performance over different stints. Luca identified how specific teams started with hard compounds for longer first stints, then switched to softer compounds later in the race when lighter fuel loads allowed for more aggressive driving. This approach demonstrated how tire strategies were not only dependent on race conditions but also on each team’s overall race plan.

Luca’s report also emphasized the relationship between pit stop frequency and race outcomes. He explored how teams that made fewer stops tended to rely on harder compounds to minimize the time in the pits, while teams that aimed for faster lap times through more frequent stops focused on softer tires. This analysis provided insights into how strategic decisions about tire and pit stop management could either gain or lose time depending on the specific demands of each race, offering a clear view of how tire selection was tailored to maximize performance across varying conditions.

3rd Place: Maria Nacu

Maria Nacu’s report stood out through its detailed exploration of how tire choices and pit stop frequency influenced race positions. She focused on how teams managed stints, analyzing the number of laps completed on different tire compounds and the impact this had on race performance. Maria’s approach highlighted the importance of balancing tire wear with performance, showing how teams that favored medium and hard tires could maintain consistent lap times across longer stints.

A significant aspect of Maria’s analysis was her focus on how tire compound choices affected race outcomes under varying conditions. She examined how drivers adapted their strategies based on the track layout and weather conditions, identifying that some teams used more aggressive strategies with soft tires during shorter stints. In contrast, others relied on harder compounds for longer, more stable performance. Her insights into tire usage under wet and dry conditions provided a comprehensive view of how teams adjusted their approaches during unpredictable races.

Maria also paid close attention to the relationship between pit stop timing and final race standings. She analyzed how teams that timed their stops efficiently gained significant advantages, particularly when pit stops coincided with race interruptions or safety cars. By demonstrating how teams balanced the need for fresh tires with minimizing time lost in the pits, Maria’s report clearly explained how pit stop strategies influenced race outcomes, tying together tire management and race pacing in a cohesive way.

Interesting Facts

In the 2024 season, teams like Red Bull Racing and Aston Martin heavily favored medium tire compounds. These compounds provided a balance between speed and durability, making them a popular choice across most circuits.

Drivers who opted for aggressive tire strategies, particularly using softer compounds early in the race, often faced significant drops in performance by the final stints due to faster tire degradation.

An analysis of the Monaco Grand Prix revealed a robust correlation between drivers’ starting positions and their final race positions, highlighting the critical importance of qualifying performance on tight circuits.

Teams that used fewer pit stops but timed them efficiently, especially during critical race phases, often finished in higher positions, highlighting the importance of minimizing pit stop frequency while maintaining tire performance.

In races with higher tire wear, teams that strategically switched to medium tires during mid-race stints managed to maintain more consistent lap times. In contrast, teams that delayed tire changes experienced significant performance drops toward the end of the race.

2024 Championship

Our challenges offer prize pools from $10,000 to $20,000, distributed among the top 10 participants. Our points system for the championship allocates between 100 and 200 points to the top 10 finishers in each challenge, with each point valued at $100. Participants accumulate these points toward the 2024 Championship. Last year, the top 10 champions received an additional $10 for each point they had earned.

2024 Championship standings prior to the F1 Racing 2024 Strategy Analysis challenge

Additionally, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists maintain their intellectual property rights while we provide support in monetizing their innovations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

F1 Racing 2024 Strategy Analysis Challenge— Final Classification was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 11, 2024: Overcoming Stress and Building Resilience in a High-Stakes Environment

In this webinar on Mental Health in Cybersecurity, we'll explore the unique challenges faced by professionals in this high-stakes field. As cyber threats continue to evolve and intensify, so does the pressure on those tasked with defending against them. This constant state of vigilance, coupled with the potential catastrophic consequences of a breach, creates an environment ripe for stress, burnout, and other mental health issues.

Thursday, 17. October 2024

KuppingerCole

IAM meets ITDR: A Recipe for Robust Cybersecurity Posture


In today's digital landscape, identity is at the forefront of enterprise security. With a growing number of cyberattacks originating from compromised identities, organizations must adopt an identity-first security approach. This approach emphasizes proactive measures over reactive responses, crucial for minimizing risks and safeguarding sensitive information.

Modern technology offers a solution through Identity Threat Detection and Response (ITDR) tools. By integrating ITDR with Identity and Access Management (IAM) systems, organizations can effectively identify anomalies and remediate risks. This integration helps build a preemptive security posture, reducing the attack surface and aligning with Zero Trust principles.

John Tolbert, Director of Cybersecurity Research at KuppingerCole, will define the technical requirements for ITDR solutions. He will discuss how ITDR can enhance threat detection, support Zero Trust initiatives, and fortify perimeter security against identity-based threats.

Harshvardhan Lale, VP of Business Development, will delve into ARCON's ITDR engine. He will illustrate its role in detecting, remediating, and responding to identity threats. Additionally, he will highlight how ARCON's solutions secure sensitive information and reduce attack vectors, providing a robust cybersecurity posture.




HYPR

Microsoft’s SFI Offers a Blueprint for Identity Security


A few weeks ago, Microsoft issued its first Secure Future Initiative Progress Report. Launched in November 2023, the Secure Future Initiative (SFI) is Microsoft’s acknowledgement that it needs to drastically improve its cloud security posture and make cybersecurity its top priority. The company has dedicated a substantial chunk of its engineering workforce to the effort ”to address the increasing scale, speed, and sophistication of cyberattacks.” In line with this mandate, a key area of focus is the protection of identities and secrets.

Identity security continues to be the weakest link in the cyber defenses of the vast majority of organizations. According to recent research, over three-quarters of companies have been hit by identity-related attacks and 69% were breached through authentication processes. One of the main incidents prompting the formation of the SFI was an attack campaign by Storm-0558, in which the threat group used a stolen key that creates multi-factor authentication codes to break into the Microsoft 365 accounts of more than 25 organizations, including government agencies.

It’s encouraging to see that identity security features prominently in the progress report, with several key areas showing improvement.

Phishing-Resistant Authentication as a Baseline

As an organization, Microsoft is making the move to phishing-resistant authentication, using passkeys or certificate-based authentication. They started with their production environment and are in the process of adoption and enforcement across all users in the productivity environment. This is exactly the right approach — tackle your most critical/at-risk systems first and roll out to the broader workforce in stages.

A phased method also fosters user acceptance. We’ve seen time and time again that passkey adoption gains a momentum of its own when users hear from their peers that it actually makes login faster and simpler. 

New Critical Control: Video-Based Verification

Perhaps the most interesting aspect of their identity security overhaul is how they now handle credential recovery situations. Although the industry widely acknowledges that knowledge-based factors are insufficient for verifying employees' identities, many organizations have yet to take action. Microsoft, however, has shifted to using video calls for user verification, aligning with the NIST 800-63-4 identity proofing guidelines for IAL2. Currently in its second public draft, NIST 800-63-4 provides important updates from 800-63-3 to combat modern identity threats leveraging new technologies and best practices.

By requiring video verification for credential recovery, Microsoft effectively shuts down one of the fastest-growing attack vectors: help desk social engineering. The $100 million attack on MGM Resorts occurred when hackers convinced help desk personnel to reset an employee’s credentials, granting them access.

Here’s an example of what such a video-based credential recovery process could look like.

Improving Secrets Management

While authentication is a critical part of identity security, secrets management is just as important. Secrets, such as API keys, encryption keys, and access tokens, are often the target of sophisticated threat actors. Large organizations like Microsoft, which deliver many services, often struggle with secrets management. The problem is only growing as enterprises build more applications, more quickly, aided by large language models (LLMs). The SFI emphasizes the need to make secrets management and security a top priority.

This includes using hardware-based protection to store secret keys and tokens, automated rotation of secrets, and increasing visibility into the context and usage of secrets. This kind of telemetry is essential to detect misuse and forgeries. According to the progress report, Microsoft has completed implementing hardware-based storage for signing keys for public and US government clouds, and has made significant headway on the remaining fronts.
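Automated rotation essentially means tracking each secret's age against a policy window and replacing it before it expires. The sketch below uses hypothetical policy values and names, and in-memory storage for illustration; a real system would back this with an HSM or a secrets manager rather than a Python dict.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import secrets as pysecrets

MAX_AGE = timedelta(days=90)  # hypothetical rotation policy

@dataclass
class ManagedSecret:
    name: str
    value: str = field(default_factory=lambda: pysecrets.token_urlsafe(32))
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_stale(self, now: datetime) -> bool:
        return now - self.created > MAX_AGE

def rotate_stale(store: dict, now: datetime) -> list:
    """Replace any secret older than the policy window; return what rotated."""
    rotated = []
    for name, s in store.items():
        if s.is_stale(now):
            store[name] = ManagedSecret(name)  # new value, fresh timestamp
            rotated.append(name)
    return rotated

now = datetime.now(timezone.utc)
store = {
    "api-key": ManagedSecret("api-key", created=now - timedelta(days=120)),
    "db-token": ManagedSecret("db-token", created=now - timedelta(days=10)),
}
print(rotate_stale(store, now))  # ['api-key']
```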

The Role of Automation in Identity Security

Automation is becoming a crucial tool in the fight against identity-related attacks. In the SFI report, Microsoft outlined how they are leveraging automation to detect and respond to identity threats in real-time. From automatically identifying suspicious login behaviors to auto-locking compromised accounts, automation ensures that potential threats are addressed quickly — before they can cause significant damage.

Organizations can adopt similar strategies by integrating automated identity protection tools that monitor user activity and enforce security policies consistently across all users and systems. These tools can help reduce human error and ensure faster response times to identity threats.
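A rule like “auto-lock after repeated failed logins in a short window” can be sketched as a sliding-window counter. The threshold, window, and function names below are illustrative assumptions, not any vendor’s API:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # hypothetical policy values
THRESHOLD = 5

failures = defaultdict(deque)   # user -> timestamps of recent failures
locked = set()

def record_failed_login(user: str, when: datetime) -> bool:
    """Track failures in a sliding window; lock the account at the threshold."""
    q = failures[user]
    q.append(when)
    while q and when - q[0] > WINDOW:
        q.popleft()  # drop failures outside the window
    if len(q) >= THRESHOLD:
        locked.add(user)  # in practice: kill sessions, alert the SOC
    return user in locked

t0 = datetime(2024, 11, 1, 12, 0)
for i in range(5):
    is_locked = record_failed_login("alice", t0 + timedelta(seconds=10 * i))
print(is_locked)  # True: fifth failure inside the window triggers lockout
```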

Action Steps for Enterprises

Microsoft’s SFI serves as a reminder for all security and engineering teams to uphold rigorous security standards and adhere to the latest industry best practices. In particular, organizations should take heed and prioritize identity protection in their own security roadmaps.

For businesses looking to take a page from Microsoft's playbook, here are a few key actions to consider:

Adopt Phishing-Resistant MFA: Transition away from traditional authentication methods, such as passwords and SMS-based MFA, to phishing-resistant options like passkeys (synced or device-bound) or certificate-based authentication.

Implement Video-Based Verification: Follow Microsoft's lead and consider adopting video-based identity verification for critical credential recovery processes to combat social engineering threats.

Leverage Automation: Use automated tools for identity verification, risk mitigation, and response to suspicious activity. Automation can act as a force multiplier for your security team, catching threats they may otherwise miss.

Enhance Secrets Management: Ensure that secrets like API keys and access tokens are stored securely, regularly rotated, and closely monitored.

HYPR’s Identity Assurance platform combines phishing-resistant MFA, automated identity verification and real-time identity risk mitigation to combat today’s threats as well as those to come. It’s built to fit seamlessly into your current identity stack, whether that’s Microsoft Entra ID or another provider. To learn more, arrange a demo tailored to your environment and use cases.


KuppingerCole

Guidance on Implementing Verifiable Credential Issuance


by Anne Bailey

Organizations interact with many users, including employees, customers, suppliers, and contractors. To handle each user’s digital journey flexibly, securely, and interoperably, organizations must shift to more user-controlled methods. OpenID for Verifiable Credential Issuance (OID4VCI) is an emerging standard that bridges the gap between a new model of digital identity interaction and known, often already implemented standards. The scope of this paper is to provide guidance to Identity and Access Management (IAM) and security architects on implementing OID4VCI. This whitepaper provides context for the user-controlled model, separates hype from reality, and identifies early learnings for organizations that are ready to issue verifiable credentials.

UNISOT

QR Codes Unlock Real-Time, Verifiable Data with UNISOT’s DPP


Celebrating this fantastic collaboration! It’s incredibly inspiring to witness how ProfilSport and Nordic Textile are leading the way in both sustainability and transparency in the fashion industry. By integrating UNISOT’s innovative Digital Product Passport (DPP) solution, the Telenor Xtra program is not only enhancing the quality of its products but also empowering consumers like never before. This partnership sets a new benchmark for providing real-time, transparent information about every garment, ensuring that consumers have access to critical details such as material origin, production practices and environmental impact.

One of the most exciting aspects of this collaboration is that the QR code on each garment doesn’t just lead to a static, pre-prepared website. Instead, it links directly to the actual, original data stored on the Enterprise Blockchain, offering dynamic, verifiable information that is continually updated throughout the product’s lifecycle. This gives consumers unprecedented access to live data about the products they purchase, building trust and confidence in the brand while ensuring transparency at every stage.
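One way a QR code can point at live yet tamper-evident data is an append-only, hash-chained log: the code resolves to the garment’s record, and the consumer app re-derives the chain before trusting the latest entry. This toy Python sketch illustrates only the general idea; the names and structure are assumptions, not UNISOT’s actual API.

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Each entry's hash covers its payload plus the previous entry's hash."""
    data = (prev_hash + json.dumps(payload, sort_keys=True)).encode()
    return hashlib.sha256(data).hexdigest()

# Toy append-only log for one garment's passport.
log = []
prev = "0" * 64
for payload in [
    {"event": "manufactured", "material": "100% recyclable fabric"},
    {"event": "shipped", "carrier": "example-logistics"},
]:
    prev = entry_hash(prev, payload)
    log.append({"payload": payload, "hash": prev})

def resolve_qr(record_log: list) -> dict:
    """Consumer app: re-derive the hash chain before trusting the latest entry."""
    prev = "0" * 64
    for entry in record_log:
        prev = entry_hash(prev, entry["payload"])
        if prev != entry["hash"]:
            raise ValueError("tampered passport data")
    return record_log[-1]["payload"]

print(resolve_qr(log)["event"])  # shipped
```

Because every entry is chained to its predecessor, new lifecycle events can keep being appended while any retroactive edit breaks the chain.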

With the focus on 100% recyclable fabric, this initiative isn’t just about creating high-quality sportswear—it’s about embracing a future where sustainability and accountability are at the forefront of every decision. The ability for consumers to scan a QR code and instantly access a garment’s DPP creates a level of engagement and trust that is unmatched. This transparency encourages informed consumer choices and elevates brand loyalty by aligning with the growing demand for ethical and eco-friendly fashion.

The Telenor Xtra program, powered by ProfilSport and Nordic Textile, is truly setting the stage for the future of fashion. By leveraging the UNISOT Asset Traceability Platform, this collaboration is pushing the boundaries of what is possible in terms of sustainable innovation. It’s exciting to see how these brands are not only meeting regulatory requirements but exceeding expectations by offering a solution that integrates technology, traceability and consumer empowerment.

This is the future of fashion — sustainable, innovative and accountable! We are thrilled to be part of this journey, supporting a movement that prioritizes transparency and paves the way for a more responsible and connected fashion industry.

The post QR Codes Unlock Real-Time, Verifiable Data with UNISOT’s DPP appeared first on UNISOT.


KuppingerCole

NIS2 Reality Check: The Deadline Is Here – Are We Ready?


by Matthias Reinwarth

Today marks a critical deadline for all EU member states: October 17, 2024, the date by which the NIS2 Directive must be transposed into national law. For some, this milestone has been met with progress and precision. For others, particularly Germany, the delay in implementation highlights a significant gap between political rhetoric and actionable cybersecurity policy.

Why NIS2 Matters

The NIS2 Directive is designed to strengthen cybersecurity across the European Union by establishing a uniform baseline of security measures, focusing on critical infrastructure, incident reporting, and cross-border coordination. The Directive itself is a powerful tool, but there’s a catch: it requires individual member states to translate its provisions into national law, a process that leaves room for delays and inconsistencies. Had it been passed as a regulation, its immediate applicability would have ensured more streamlined and consistent compliance. But as it stands, the uneven pace of implementation across member states threatens to undermine its potential impact.

Germany, currently six months behind schedule, exemplifies the challenges in turning political promises into tangible action. While the cybersecurity conversation remains a popular talking point in speeches, the urgency of addressing real-world cyber risks seems underestimated. And in a world where cyberattacks are increasingly sophisticated and frequent, every delay leaves critical infrastructure more exposed.

Missing Pieces: The “Durchführungsverordnung”

As usual: I am not a lawyer, but one of the most pressing challenges for organizations preparing for NIS2 compliance is clearly the absence of detailed regulatory guidance. Some legal instruments are still missing, notably a national counterpart to the “Durchführungsverordnung” (Implementing Regulation) that exists at the EU level. Such a regulation should provide the concrete, actionable details on how the directive’s rules are to be enforced and which specific technical standards must be met.

Such a specification is expected and needed to offer the necessary administrative and procedural details at the national level, ensuring organizations know exactly what is expected of them. In Germany, having access to such a detailed document is crucial for organizations to understand their obligations under NIS2. Without it, they just cannot develop the processes they need to comply effectively, and that puts both their operations and security posture at risk.

The Need for Well-Defined Notification Duties

A core aspect of NIS2 is the requirement for organizations to report cybersecurity incidents, especially those that threaten critical infrastructure. However, the details of what exactly constitutes a reportable incident remain unclear. This “fuzziness” in definitions means organizations could either over-report, leading to unnecessary administrative burden, or under-report, leaving serious threats unnoticed.

Beyond incident reporting, it’s essential that organizations receive timely feedback from authorities. A well-defined feedback loop allows businesses to adjust their security strategies based on emerging threats and evolving attack vectors. But, until clear guidance is issued, these processes remain underdeveloped, leaving companies unsure of how to respond to incidents and how to improve their cybersecurity posture in real-time.

Going Beyond ISO 27001: Meeting NIS2’s Requirements

Many organizations might think that being compliant with ISO 27001 or other established cybersecurity frameworks is enough. While ISO 27001 offers a strong foundation - focusing on risk management, information security, and control structures - it falls short of the specific requirements imposed by NIS2. The Directive goes further, introducing mandatory reporting obligations, sector-specific rules, and increased regulatory oversight. In short, organizations need to go beyond their traditional control frameworks to fully meet NIS2’s stringent demands.

More Than Just Technology: A Holistic Approach to Compliance

One of the most underestimated aspects of NIS2 is its focus on a holistic approach to cybersecurity. Compliance isn’t just about having the right technology in place; it’s about creating a robust framework that includes policies, processes, organizational structure, and people. Each of these elements plays a crucial role in ensuring that an organization can not only prevent incidents but respond effectively when they occur.

Policies: Clear and enforceable security policies are the foundation of any cybersecurity strategy. These policies need to be aligned with both the organization’s goals and regulatory demands, providing a formal framework that governs the use of technologies and the response to incidents.

Processes: Incident response, risk assessments, and continuous monitoring must be integrated into daily operations. These processes define how threats are detected, reported, and mitigated, ensuring that organizations are prepared to meet NIS2’s strict reporting timelines.

Organizational Structure: Cybersecurity efforts must be coordinated across the entire organization. This includes having clear governance structures, with defined roles for key personnel such as the CISO, compliance officers, and dedicated security teams.

People: Human error is often the weakest link in cybersecurity. NIS2 emphasizes the need for regular training and awareness programs, ensuring that all employees - not just IT staff - are aware of the risks and know how to respond to threats.

The Clock Is Ticking

Despite the delays in many EU member states, the urgency to act is real. Organizations that have not yet begun their compliance journey are at significant risk, and even those that are somewhat prepared still face challenges in aligning with the directive’s requirements. Waiting for final regulations to be fully in place is not an option - time is running out, and achieving compliance will require significant time, effort, and resources.

KuppingerCole Analysts are well-equipped to assist organizations on their journey to compliance and cybersecurity maturity. Our advisory team brings extensive experience in supporting clients through complex cybersecurity initiatives, and we’ve already laid significant groundwork in the areas of ISO 27001 and TISAX certifications, helping businesses strengthen their security frameworks and meet industry standards. Our experts can provide tailored advice and actionable strategies to ensure that your organization is on the right track.

Here’s how we can further support your cybersecurity efforts:

New Membership for Cybersecurity Research: We’ve launched a new membership offering that provides exclusive access to cutting-edge cybersecurity research, helping organizations stay ahead of emerging threats and compliance challenges. Members also benefit from direct access to our analysts and advisors, offering personalized guidance to navigate regulatory changes like NIS2 or tackle specific cybersecurity issues your organization may face.

The cyberevolution 2024 Event in December: Don’t miss our upcoming event, cyberevolution 2024, taking place December 3-5, 2024 in Frankfurt, Germany. This event will bring together cybersecurity practitioners, industry experts, and thought leaders to discuss the latest trends, challenges, and solutions in the cybersecurity landscape. The conference will feature a wide range of tracks covering critical topics like NIS2 compliance, Zero Trust, identity-centric security, and much more. It’s the perfect opportunity to network with peers, learn from top experts, and gain insights that can help you implement robust cybersecurity measures.

The deadline may be today, but the journey is just beginning.


The EUDI Wallet: A First Step on Germany’s Way Into a Flexible Digital Identity Future


by Martin Kuppinger

Germany has officially launched the German part of the EUDI Wallet initiative, a significant advancement in digital identity management as part of the broader European eIDAS 2.0 regulation. The recent announcement made by the German Ministry of the Interior was about the way forward until the planned availability in 2027 and openness for private 3rd parties to also provide certified EUDI wallets in Germany.

The EUDI Wallet initiative, with every member state being obliged to provide such wallets to the citizens by 2027, enables citizens to carry secure digital IDs on their smartphones, making the digital future more accessible. Yet, the impact of this move extends far beyond simply digitizing ID cards; it opens up a world of potential for decentralized identity, cross-border verification, and secure data sharing.

Flexibility Through Openness: A New Paradigm for Digital Identity

One of the most exciting aspects of the EUDI Wallet is its open architecture. Instead of being limited to a single governmental solution, users will be able to choose from various wallets provided by both public and private entities. This marks a departure from older, more restrictive digital identity systems. The result? A competitive marketplace where innovation can flourish, but backed by standards for interoperability. Users will benefit from wallets tailored to specific needs, whether for travel, finance, or even healthcare. 

The inclusion of private providers fosters an environment where innovation is not only encouraged but essential. For instance, wallets could soon support more than just digital identification—they could incorporate advanced functionalities like micropayments or even specialized uses in sectors like healthcare, finance, or mobility. Envision having a travel wallet or finance wallet for everything you require for these use cases.

Innovation and Competition: The Role of Private Providers

The key to this transformation lies in the involvement of private companies, which will create new opportunities for competition and innovation. The OpenWallet Foundation, led by Daniel Goldscheider, is already laying the groundwork for open-source, interoperable wallets. This approach could lead to highly dynamic digital wallets that are capable of interacting seamlessly across different sectors and services.

However, the exact requirements for private wallet providers remain a bit unclear. Certification processes and regulatory hurdles could pose challenges. These will need to be addressed to ensure the widespread adoption of private wallets while maintaining security and interoperability standards.

Beyond Basic Functionality: A Multi-Purpose Digital Wallet

While traditional digital wallets have often been confined to single-purpose applications, the EUDI Wallet aims to break this mold. The open ecosystem allows for multi-functional wallets that can serve various purposes. Imagine using a single wallet not just for banking, but also for travel bookings, healthcare appointments, and even identity verification across borders. Or, as mentioned above, imagine feature-rich apps that embed a wallet for such use cases.

This versatility could prove transformational for both consumers and businesses. The ability to incorporate decentralized identities, secure payments, and cross-border functionality into a single solution could unlock significant economic value and streamline a range of processes.

Decentralized Identity: A Game-Changer

Decentralized identities are a cornerstone of this vision. Instead of being controlled by a central authority, decentralized identities allow individuals to have greater control over their personal data. This could revolutionize the way we interact with digital services, providing a higher level of security and privacy. 

Decentralized IDs will also play a critical role in broader, more complex use cases. For instance, secure data sharing and cross-border verification processes will benefit immensely from this technology. This is not just about convenience, but about creating a more secure, interoperable digital future.

The Path Forward: What’s Next for the EUDI Wallet?

Despite the remaining uncertainties, one thing is clear: the EUDI Wallet has the potential to redefine how we think about digital identity. Initiatives like eIDAS 2.0 and the OpenWallet Foundation are laying the groundwork for a future where digital identity systems are not just interoperable but truly dynamic.

As more complex use cases come into play, the value of this technology will become increasingly apparent. For instance, as highlighted in my EIC keynote, the broader application of wallets—whether for decentralized identity verification, loan applications at banks, or employee onboarding—will unlock significant benefits for both users and the economy at large. Germany’s openness to supporting third-party EUDI Wallets is hopefully just the beginning, and we can expect significant advancements in digital identity in the coming years.

The EUDI Wallet represents a bold move toward a more open, flexible, and innovative digital identity ecosystem. With the involvement of both public and private providers, the future of digital identity looks promising—and it’s happening now.


Nov 27, 2024: Don’t Let the Endpoints Become the Entry Door for Attackers

Most cyberattacks are identity-based and come in via endpoints. Identity Security and Endpoint Protection are thus cornerstones of every successful cybersecurity strategy. EPDR (Endpoint Protection, Detection & Response) has evolved as a unified approach that goes beyond traditional anti-malware and EPP (Endpoint Protection Platform) and adds detective and responsive capabilities. It also closely integrates with further detective and responsive technologies such as XDR (eXtended Detection & Response).

Ocean Protocol

DF111 Completes and DF112 Launches

Predictoor DF111 rewards are available. DF112 runs Oct 17 to Oct 24, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 111 (DF111) has completed.

DF112 is live today, Oct 17. It concludes on October 24. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF112 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN on it; accurate Predictoors earn rewards, including stake slashed from inaccurate ones.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs. To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF112

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
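The even distribution described above is, at its core, a budget-splitting exercise. A minimal sketch in Python, assuming a 7-day round and whole-OCEAN units; the remainder handling is an illustrative choice, not the actual DF Buyer agent logic:

```python
def split_budget(total: int, epochs: int) -> list[int]:
    """Split a reward budget evenly across epochs, spreading any remainder
    over the earliest epochs so the full amount is always allocated."""
    base, remainder = divmod(total, epochs)
    return [base + (1 if i < remainder else 0) for i in range(epochs)]

# DF112's 37,500 OCEAN spread over a 7-day round
daily = split_budget(37_500, 7)
```

Each day receives 5,357 or 5,358 OCEAN of feed purchases, and the full 37,500 is spent by week's end.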

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF111 Completes and DF112 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 16. October 2024

Northern Block

Northern Block Secures Funding from Natural Resources Canada


Northern Block Secures Funding from Natural Resources Canada to Drive Digital Trust in the Mining Sector Through Sustainability Credentials, Enhancing Supply Chain Transparency and Global Accountability.

We are thrilled to announce that we have been awarded grant funding from Natural Resources Canada’s (NRCan) Global Partnerships Initiative, part of the Canadian Critical Minerals Strategy. This investment is not only a testament to the prior milestones we have accomplished in the mining ecosystem, but also an exciting leap forward in our mission to ‘credentialize’ sustainability data and enhance supply chain transparency across Canada’s critical minerals sector.

Transforming the Mining Ecosystem with Digital Credentials

Our ongoing work in the mining sector has centered around digitally transforming key sustainability metrics, particularly those aligned with the Towards Sustainable Mining (TSM) standard, into standards-based digital credentials. These credentials allow mining companies to demonstrate their commitment to responsible business practices and make that data securely available to key stakeholders, including investors and supply chain participants. The momentum we’ve built with the Mining Association of Canada (MAC), a strategic partner of ours, has been pivotal. We are now building on the successes we’ve already demonstrated, such as enabling mining operators to self-issue verified TSM reports as digital credentials. This progress was also made possible by the leadership and investment of the BC Government in the Energy and Mines Digital Trust (EMDT) program. This has resulted in greater transparency and trust in the supply chain, as highlighted in our previous work. This new federal funding will further accelerate the scale and impact of our efforts, ensuring that digital credentials become an integral part of the global mining ecosystem.

With NRCan’s support, Northern Block will continue to streamline and digitize the reporting processes for MAC members, transforming sustainability reports into secure, digital credentials. These efforts directly support global sustainability initiatives like the United Nations Transparency Protocol, ensuring that data from initiatives such as TSM can be integrated into digital product passports, and contribute to more transparent and sustainable supply chains globally.

Expanding the Reach of TSM with Digital Product Passports

Our efforts, led by the Mining Association of Canada (MAC) and the Towards Sustainable Mining (TSM) standard, are setting the benchmark for sustainability data in the mining industry. TSM has taken the lead in creating verifiable digital credentials that enhance transparency and trust across the entire supply chain. As these digital credentials gain traction, we’re seeing increasing interest from other global standards bodies, eager to follow the path MAC has paved.

Our work ensures that TSM credentials will serve as a foundational data source for emerging digital product passports, making it easier for mining companies to securely share critical information on sustainability, ethical sourcing, and environmental impact. By positioning TSM at the forefront, we’re not only adding value to MAC members but also shaping the future of supply chain transparency for critical minerals worldwide.

Exciting Releases Ahead for 2024

As we look ahead, we are excited about the upcoming releases in Q4 2024, developed in partnership with the Mining Association of Canada and several of their key members, including major mining operators. These developments will further solidify the role of digital credentials in the mining industry and reinforce Canada’s leadership in responsible mining practices.

Thanks to the support from Natural Resources Canada, we now have a three-year roadmap that allows us to double down on building and expanding this ecosystem. This grant represents a significant validation of the work we’re doing and the value we’re creating, not just for our current partners but for the entire critical minerals ecosystem. We’re eager to leverage this momentum and continue driving innovation across the mining sector.

For more information, please contact:

Northern Block:

Website: northernblock.io
Email: Mathieu Glaude, Founder & CEO – mathieu@northernblock.io

The post Northern Block Secures Funding from Natural Resources Canada appeared first on Northern Block | Self Sovereign Identity Solution Provider.


KuppingerCole

Dec 17, 2024: HigherEd CIO Virtual Summit: Driving IT Efficiency With Automation of Student Matriculation and Access Governance

In the post-COVID era, higher education institutions face unprecedented challenges in managing student matriculation and access governance. With IT departments stretched thin and an influx of in-person students, the risk of over-provisioning and compliance violations has skyrocketed. Balancing efficiency with data security and privacy concerns with FERPA, HIPAA, and other regulations has become a critical issue for HigherEd CIOs and IT professionals.

Nov 20, 2024: Transforming SOCs: The Power of SOAR Solutions

Cyberattacks are becoming increasingly sophisticated, requiring innovative approaches to cybersecurity. This webinar will explore how Security Orchestration, Automation, and Response (SOAR) platforms can revolutionize incident response by providing security teams with advanced threat detection and mitigation tools. We'll discuss the challenges of traditional SIEM systems and the transformative potential of integrating generative AI into SOAR solutions.

Tuesday, 15. October 2024

KuppingerCole

A False Sense of Security: Authentication Myths That Put Your Company at Risk


In today's digital landscape, organizations often fall prey to a false sense of security, particularly concerning authentication practices. Misconceptions about identity security can leave companies vulnerable to evolving threats, potentially compromising sensitive data and systems. Understanding the realities behind these myths is crucial for developing robust authentication strategies.

Modern technology offers advanced solutions to address these authentication challenges. By leveraging dynamic risk flows, adaptive authentication methods, and comprehensive identity management systems, organizations can significantly enhance their security posture. These technologies enable a more nuanced and effective approach to authentication, moving beyond static, one-size-fits-all solutions.  
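As a rough illustration of what a dynamic risk flow can look like, the sketch below maps a few contextual signals to an authentication requirement. The signals, weights, and thresholds are invented for illustration and are not any vendor's actual policy:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool    # device previously seen for this user
    expected_geo: bool    # login comes from a usual location
    sensitive_app: bool   # target application handles sensitive data

def required_factors(ctx: LoginContext) -> list[str]:
    """Derive the authentication requirement from an additive risk score."""
    risk = (0 if ctx.known_device else 2) \
         + (0 if ctx.expected_geo else 2) \
         + (1 if ctx.sensitive_app else 0)
    if risk == 0:
        return ["password"]                   # low risk: single factor suffices
    if risk <= 2:
        return ["password", "totp"]           # medium risk: step up to MFA
    return ["password", "totp", "webauthn"]   # high risk: phishing-resistant factor
```

A known device in a usual location gets a single factor, while an unrecognized device reaching a sensitive application is pushed to a phishing-resistant factor, which is the kind of nuance a static, one-size-fits-all policy cannot express.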
Paul Fisher, Lead Analyst at KuppingerCole, will moderate this insightful session. He will guide the discussion, ensuring that key authentication myths are thoroughly examined, and that practical, actionable insights are shared with the audience. His expertise will help frame the conversation within the broader context of identity and access management.

Stuart Sharp and Alicia Townsend from One Identity will investigate common authentication myths, such as the adequacy of MFA alone and the perceived security of certain authentication methods. They will provide strategies for identifying user communities, classifying risk by application, and developing dynamic authentication flows to reduce lateral movement risks and enhance overall security.  




Dock

Governance Vote: Changes to the Validator Structure


As part of the planned merger of the DOCK token and blockchain with the cheqd blockchain, which DOCK token holders approved, the network is set to sunset in 2025. With this significant transition on the horizon, we are proposing proactive measures to safeguard the network from potential volatility and ensure its stability during this period.

In light of this, we are proposing an important change to the validator structure: reducing the active set of validators from 50 to 20. 

Additionally, we propose raising the self-bond requirement to 1 million tokens for any entity wishing to participate as a validator.

If you are currently among the top 20 validators, measured by total bonded amount, this change will not affect your status. However, to remain an active validator, you must be in the top 20. We recognize that this action may reduce the overall decentralization of the network in the short term. However, this is a necessary measure designed to ensure the smooth management of the sunsetting network. Once the token migration begins, we will monitor the state of the network and may need to adjust these settings again.

This proposal is now open to voting for DOCK token holders. You will have 7 days to cast your vote. If approved, the changes will be automatically enacted at the conclusion of the voting period.

Vote here: https://fe.dock.io/#/democracy

We value your participation in this important decision.


KuppingerCole

Beyond the CE Mark: How the Cyber Resilience Act Redefines Product Security


by Martin Kuppinger

In three years, the familiar CE mark will take on a new role: signaling compliance with robust cybersecurity standards. While this might sound like just another consumer-facing regulation, it’s actually part of a much larger transformation under the EU’s Cyber Resilience Act (CRA). This legislation is not merely about putting a sticker on products; it marks a shift in how security is integrated into the lifecycle of everything from household devices to vehicles. If there’s software in a product, security must be built in from the very beginning—by design, not as an afterthought.

From NIS2 to UNECE R155: Security by Design

The CRA sits alongside other crucial regulations aimed at fortifying Europe’s digital ecosystem. The NIS2 Directive, for example, broadens the scope of the original NIS directive to include critical sectors like healthcare, energy, and transportation, enhancing the security of network and information systems. It enforces stricter requirements for incident reporting and proactive risk management, directly addressing today’s complex threat landscape.

Meanwhile, in the automotive sector, UNECE regulations R155 and R156 are revolutionizing how vehicles are secured. UNECE R155 requires manufacturers to implement cybersecurity management systems (CSMS) to prevent hacking and cyberattacks, while UNECE R156 ensures that vehicle software remains up to date, mandating secure over-the-air (OTA) updates. These regulations cover both new and existing models, forcing manufacturers to rethink how they protect connected vehicles throughout their entire lifecycle.

Cybersecurity Costs Hit Fiat 500

These regulatory shifts are already making waves in industry. A very tangible example is Fiat’s decision to end production of the beloved Fiat 500, after 17 years and millions of units sold. The reason is that the costs of retrofitting older models to meet the stringent UNECE cybersecurity standards, specifically R155 and R156, proved too high. Fiat is not alone; other manufacturers may also find it challenging to upgrade their legacy systems to meet new requirements, signaling the profound impact these regulations will have across the automotive sector.

The Cyber Resilience Act: Security at the Core

The CRA is part of a broader regulatory effort to ensure that every digital product—not just cars—meets strict security standards. More than a compliance measure, the CRA enforces security-by-design, a principle that requires manufacturers to anticipate and mitigate cyber threats from the earliest stages of product development. This shift has implications far beyond product safety; it also affects the entire supply chain, as vendors and partners must meet the same high standards.

No longer can companies afford to treat cybersecurity as an afterthought. It’s now at the heart of digital business, impacting not only product design but also how products are maintained and updated over time. In an era where every connected device is a potential target, this approach ensures resilience in the face of evolving threats.

Future-Proofing Digital Europe

What we’re seeing is a clear message from the EU: cybersecurity must be baked into every layer of product development and supply chain management. The CE mark may be the most visible sign of this change, but behind it lies a robust legal framework designed to safeguard the future of Europe’s digital economy. From vehicles to consumer devices, the CRA and related regulations like NIS2 and UNECE R155/R156 are reshaping how businesses design, deploy, and secure their products.

The era of retrofitting old models with new security patches is coming to an end. For businesses (and every business is a digital business nowadays), now is the time to embrace cybersecurity as a central pillar of their product strategy and corporate strategy. Anything less, and they risk not just regulatory penalties but losing the trust of consumers in a world where digital safety is paramount.


Elliptic

Crypto regulatory affairs: Singapore consults on new digital payment token licensing and compliance guidelines

The Monetary Authority of Singapore (MAS) is planning to establish strict licensing criteria for cryptoasset firms that serve an international clientele from Singapore. 



Tokeny Solutions

RWA and DePIN: The Future of Assets and Infrastructure

The post RWA and DePIN: The Future of Assets and Infrastructure appeared first on Tokeny.
What is RWA?

In the blockchain world, Real World Assets (RWA) refer to tangible, physical assets with economic value, such as real estate, gold, vehicles, and art. Tokenizing these assets offers three main benefits: it opens the door for more people to invest by lowering barriers to entry, enables easy transferability—similar to sending a PayPal transaction—and allows the assets to be used in decentralized finance (DeFi) applications, such as providing liquidity in an AMM or using them as collateral to borrow tokenized cash.

Fractionalizing the ownership of these RWAs often means turning the assets into financial instruments. Typically, this involves creating an investment vehicle like a Special Purpose Vehicle (SPV) to hold the underlying asset. Tokenization is the process of representing ownership of financial instruments such as shares or debts of the SPV as tokens on a blockchain, allowing for digital purchase, self-custody, easy transfers, and usage of assets. These tokens represent securities and must comply with strict regulatory rules; only qualified investors meeting regulatory conditions can trade and hold them.

In most cases, the ERC-20 standard should not be used for tokenizing RWAs, as ERC-20 tokens are permissionless, allowing transfers to anyone without restriction. However, bearer instruments are illegal in most jurisdictions. This is where permissioned tokens using the ERC-3643 standard become vital. They ensure that only qualified users can hold them, which is crucial for compliance with regulations.
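The difference between a permissionless and a permissioned token comes down to a transfer-time eligibility check. The sketch below captures that idea in Python; the class and method names are simplified stand-ins, not the actual ERC-3643 Solidity interface:

```python
class IdentityRegistry:
    """Tracks which addresses have passed KYC/eligibility checks (simplified)."""
    def __init__(self) -> None:
        self._verified: set[str] = set()

    def register(self, addr: str) -> None:
        self._verified.add(addr)

    def is_verified(self, addr: str) -> bool:
        return addr in self._verified

class PermissionedToken:
    """ERC-3643-style token: transfers succeed only between verified holders."""
    def __init__(self, registry: IdentityRegistry) -> None:
        self.registry = registry
        self.balances: dict[str, int] = {}

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # Unlike ERC-20, eligibility is checked before any balance moves.
        if not (self.registry.is_verified(sender) and self.registry.is_verified(receiver)):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

# usage sketch: a transfer to an unverified address is rejected
registry = IdentityRegistry()
registry.register("alice")
token = PermissionedToken(registry)
token.balances["alice"] = 100
blocked = token.transfer("alice", "bob", 10)   # bob is not verified yet
registry.register("bob")
allowed = token.transfer("alice", "bob", 10)
```

An ERC-20 token would execute both transfers; the permissioned version rejects the first because the receiver has not cleared the eligibility check.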

The RWA market is one of the fastest-growing markets in the blockchain industry, having reached an all-time high of $12 billion tokenized, according to a Binance Research report. However, this figure doesn’t fully capture the market’s scale.

“At Tokeny alone, we’ve facilitated the tokenization of more than $32 billion worth of assets onchain.”

Shurong Li, Head of Marketing at Tokeny


Many of our clients choose not to make their data publicly available, as these are often private assets. Additionally, large institutions face challenges in accepting onchain cash due to regulatory uncertainty and they still prefer to invest in fiat. Given the current scale, we expect this market to grow significantly in the coming years.

What is DePIN?

Decentralized Physical Infrastructure Networks (DePIN) is an emerging concept where decentralized networks are used to manage and operate physical infrastructure. These networks include cloud services, wireless networks, sensor networks, mobility and energy networks. DePIN networks incentivize individuals to contribute to the bootstrapping phase of growth without relying on outside resources. This means individuals are incentivized by tokens to build up the supply of infrastructure without the need for centralized operators. DePIN addresses the issue that traditional centralized infrastructure, operated by corporations, requires a significant investment of time and money for both building and maintaining infrastructure, making it nearly impossible for individuals to build networks.

The main drive of DePIN systems is for Web3 companies to outsource all the building and maintenance of these network infrastructures. Take Hivemapper, for example, a decentralized digital map of the world (sensor networks). They provide users, known as “mappers” with a dashcam to drive around and capture real life images of everything they pass. This is one of the methods to build and maintain the infrastructure of this network. The incentive for the individuals contributing is earning tokens that hold monetary value, which can be redeemed to access premium map data and participate in governance decisions. The more a user contributes, the more infrastructure is built and maintained, the more tokens they receive as incentive.
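The contribute-to-earn loop described above boils down to a proportional payout. A minimal sketch, where the contribution metric and pool size are illustrative and do not reflect Hivemapper's actual reward formula:

```python
def distribute_rewards(contributions: dict[str, float], pool: float) -> dict[str, float]:
    """Split a reward pool among contributors in proportion to their
    measured contribution (e.g. kilometers of fresh map coverage)."""
    total = sum(contributions.values())
    if total == 0:
        return {who: 0.0 for who in contributions}
    return {who: pool * amount / total for who, amount in contributions.items()}

# three mappers share a 1,000-token epoch pool in proportion to coverage
rewards = distribute_rewards({"mapper_a": 120.0, "mapper_b": 60.0, "mapper_c": 20.0}, 1_000.0)
```

The mapper who covered twice as much road earns twice the tokens, which is exactly the incentive that bootstraps the network's supply side.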

What is the Difference Between the Two?

Although RWAs and DePIN both interact with the physical world, they have different use cases and operate in distinct ways. These differences include their purpose, the markets they operate in, the regulations involved, and the concept of ownership versus contribution.

RWA operates in the financial sector, involving tangible real-world assets like real estate, gold, or art that are converted into tokens representing fractionalized ownership. These tokens can be bought, sold, and traded among authorized investors.

“To ensure compliance, RWAs must strictly follow regulations, often using permissioned tokens such as ERC-3643.”

Luc Falempin, CEO of Tokeny


The goal of RWA is to democratize the investment and ownership of physical assets, making them more accessible to a wider range of investors through tokenization.

In contrast, DePIN focuses on decentralizing the construction and maintenance of networks in the infrastructure sector. Instead of tokenizing existing assets, DePIN networks incentivize individuals to contribute physical resources such as server hosting, energy storage, and data collection. Contributors earn tokens that often provide exclusive benefits and hold monetary value in exchange for their participation. DePIN faces fewer regulatory challenges since it involves contributions to infrastructure rather than ownership of assets.

At the same time, both RWA and DePIN require onchain identity management. For RWA, onchain identity ensures compliance by verifying KYC status and guarantees unlosable ownership. Tokenized RWAs also have their own onchain identity, allowing for enriching the data associated with the assets themselves. In the case of DePIN, without robust verification of the devices or service providers contributing to the network, there’s a risk of payouts being claimed fraudulently, which can harm the network’s performance. This makes decentralized identity (DID) frameworks crucial for DePIN as well.

ONCHAINID, an open-source DID framework used in ERC-3643, is an excellent solution. A verifier can conduct the necessary checks and issue verifiable credentials as proof. This ensures that only properly functioning devices and valid contributions to the network are recognized, maintaining the integrity and sustainability of the system and enhancing its overall performance.
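The claim-checking idea can be sketched as follows; the claim topic, issuer DID, and data shapes are illustrative assumptions, not the actual ONCHAINID interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str   # the device or provider identity the claim is about
    topic: str     # what was attested, e.g. "DEVICE_CERTIFIED"
    issuer: str    # the verifier that performed the check

# illustrative trusted-issuer list; a real deployment would resolve this onchain
TRUSTED_ISSUERS = {"did:example:verifier"}

def accept_contribution(device_id: str, claims: list[Claim]) -> bool:
    """Accept a network contribution only if the device carries a
    certification claim from a trusted issuer."""
    return any(
        c.subject == device_id
        and c.topic == "DEVICE_CERTIFIED"
        and c.issuer in TRUSTED_ISSUERS
        for c in claims
    )

good_claim = Claim("device-1", "DEVICE_CERTIFIED", "did:example:verifier")
forged_claim = Claim("device-1", "DEVICE_CERTIFIED", "did:example:unknown")
```

A contribution backed by a claim from an unknown issuer, or a claim about a different device, is rejected before any payout is considered.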

What is the Opportunity to Make the Two Work Together?

Combining RWA and DePIN presents a significant opportunity to transform both financial investments and infrastructure development. Together these sectors can push forward growth and innovation in the form of a hybrid ecosystem.

Tokenization of Infrastructure: Co-Ownership of DePIN Devices. Real-world devices, such as renewable energy systems or critical IoT infrastructure, can be costly for individual investors. By tokenizing ownership, for example through the units or shares of a fund that invests in one or many DePIN devices, people can co-own multiple DePIN devices. The key benefits are listed below.

Improved Accessibility: Allowing individuals to co-own expensive infrastructure devices opens up investment opportunities for a wider pool of participants, making it possible for people to co-own high-value assets like solar panels or data nodes.

Enhanced Transferability: Unlike physical devices, which can be difficult to sell or exchange, fractional ownership can be more easily traded. Moreover, tokenization enables peer-to-peer transfers, enhancing transferability and ultimately increasing the liquidity of the assets.

New Opportunities and Stability of the DePIN Network: Beyond owning a piece of the infrastructure, tokenized shares can also be used in DeFi applications. Investors can provide liquidity, stake, or use these tokens as collateral to generate additional financial yield, unlocking even more value from their co-ownership of infrastructure assets without needing to sell the devices, which helps preserve the stability of the DePIN network.
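A minimal Python sketch can make the co-ownership mechanics above concrete: a hypothetical fund issues fractional units, lets holders transfer them peer-to-peer, and splits device rewards pro-rata. The DeviceFund class and its methods are illustrative assumptions, not a real Tokeny contract:

```python
class DeviceFund:
    """Hypothetical fund tokenizing co-ownership of DePIN devices."""

    def __init__(self) -> None:
        self.holdings: dict[str, int] = {}

    def issue(self, holder: str, units: int) -> None:
        # Mint new co-ownership units to a holder.
        self.holdings[holder] = self.holdings.get(holder, 0) + units

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # Peer-to-peer transfer: the property that makes fractional
        # ownership more liquid than selling a physical device.
        if self.holdings.get(sender, 0) < units:
            raise ValueError("insufficient units")
        self.holdings[sender] -= units
        self.holdings[receiver] = self.holdings.get(receiver, 0) + units

    def distribute(self, reward: float) -> dict[str, float]:
        # Split a network reward pro-rata across all unit holders.
        total = sum(self.holdings.values())
        return {h: reward * units / total
                for h, units in self.holdings.items()}
```

For example, a holder with 60 of 100 units would receive 60% of each reward distribution, while remaining free to trade units without touching the underlying device.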

Conclusion

In conclusion, RWAs and DePIN, while distinct in their purpose, share common ground in turning the physical world digital. The opportunity to combine these concepts opens the door for innovative applications in finance, infrastructure, and decentralized economies, creating more accessible, efficient, and resilient systems for managing physical assets and infrastructure globally. As blockchain technology continues to evolve, the synergy between RWAs and DePIN could be crucial in shaping the next wave of decentralization.


The post RWA and DePIN: The Future of Assets and Infrastructure appeared first on Tokeny.


Dark Matter Labs

#1 Are we coding too soon? — Day 3


This blog is the first in a series documenting the Re:Permissioning the City(PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 3: Product scoping — balancing strategy and feasibility

On the third and fourth days of the workshop, we started sketching wireframes based on the user journey. This required merging the two scenarios we developed, creating a coherent flow, and listing out both the technical and UI/UX requirements.

Once we laid out the entire journey, we quickly realised that a significant part of what we were building was in fact quite similar to a typical booking platform. The system had two interdependent parts: 1) a booking system that allows users to list spaces and book events, and 2) a new permissioning system that introduces alternative ways to approve bookings and allows users to join permissioning groups and create rules.
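The split between these two interdependent parts can be sketched as a minimal interface, with the booking system delegating approval decisions to a pluggable permission engine. All class and method names here are hypothetical illustrations, not taken from the PtC codebase:

```python
from typing import Protocol

class PermissionEngine(Protocol):
    """The permissioning half: an interchangeable component that
    decides whether a booking may go ahead."""
    def decide(self, space_id: str, activity: str) -> str: ...

class BookingSystem:
    """The booking half: lists spaces and records bookings, delegating
    approval to whichever permission engine it is given."""

    def __init__(self, engine: PermissionEngine) -> None:
        self.engine = engine
        self.bookings: list[tuple[str, str, str]] = []

    def request_booking(self, space_id: str, activity: str) -> str:
        outcome = self.engine.decide(space_id, activity)
        self.bookings.append((space_id, activity, outcome))
        return outcome

class AutoApprove:
    """A trivial engine standing in for richer permissioning logic."""
    def decide(self, space_id: str, activity: str) -> str:
        return "approved"
```

Structuring the dependency this way is what later makes it possible to swap the trivial engine for an interoperable permissioning plugin without rewriting the booking side.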

Given our constraints on time and capacity (4 months of development time), we had to prioritise, which meant deciding which part of the system to build.

After some debate, we arrived at three potential strategies and chose one to pursue.

Maximise Experimentation
In this approach, we aimed to minimise the necessary development efforts, particularly for features already common in the market. By doing so, we could redirect our development capacity towards creating interactive prototypes that facilitate permissioning experiments. This included exploring scenarios such as “How would a liability group come together?” and “How would this permissioning group share decision-making responsibilities?” (Focusing on permissioning system)

Risks:

There is a risk that funders may not support this approach, as the booking system may not be fully functional.
The development timeline could be delayed because the prototype is not yet fully specified, meaning the development team would need to wait until the prototype design is finalised.

Opportunities:

Focusing on experimentation benefits future pathway building, especially for securing innovation funding.

Maximise Potential for Real Users
This strategy recognises the significant impact of involving real users at the end of the process. We proposed developing a functional booking system while continuing to explore the formation of liability groups and the permissioning mechanism through design workshops. (Focusing on booking system)

Risks:

Misalignment with the broader Dm identity and portfolio: investing too much time in developing features that are already available in the market may not align with Dm's vision and strategic goals.
Deadline pressure: even if we allocate all development capacity to building a fully functioning system, we may still struggle to meet the 4-month deadline.
Project objectives: we want to validate our concept around permissioning through this first phase, and we cannot do that by building a booking system.
Funding risks: it depends on what kind of funding we are going for, but innovation funders will want to see the innovation.

Opportunities:

Foundation for future experimentation: developing functional software provides Dm with solid tools and a platform for future experimentation.
High-quality delivery: this approach ensures that we deliver a fully functional system, likely to perform well in assessments.
User-ready: if we have real users, we can apply for other types of funding, e.g. specifically for product development.

Interoperable Permission Engine
In this direction, we focused on the importance of a fully functional user flow while dedicating our development efforts to creating versatile digital tools for experimentation. This includes developing an interoperable permissioning plugin compatible with existing booking systems. (Focusing on permissioning system)

Risks:

We could end up developing the entire full-stack system for the MVP.
It's uncharted territory, so we might underestimate the development time needed.

Opportunities:

Can focus on building more value-aligned outcomes.
Can acquire a broader range of potential users than running a single platform ourselves.
Can provide more dynamic types of usage by allowing flexibility in the scope of entities who host the permission engine.

We ended up choosing the third option, which was suggested by our backend developer Donghun. We documented this lengthy debate and decision-making process because it triggered a lot of critical, fundamental questions and areas for clarification.

What does product development in Dark Matter Labs look like?

Triggered by the debate around feasibility and vision, we had a chance to reflect on the tensions caused by different priorities. As a collective of individuals primarily trained in architecture, design, and policy, Dark Matter Labs as an organisation doesn't resemble a typical tech start-up. What, then, does a product development journey look like in our unique context? How is the product we are striving to develop different, and how should the development journey be adapted to our current team dynamics without compromising delivery? We don't assume we can answer these questions right now, but we document here some of the reflections that emerged in our conversations during the workshop.

How is our product different?

We all agree that Dark Matter Labs is not a tech start-up trying to make a product that responds to market demands. We are more of a strategic design and research lab interested in elucidating systemic problems and developing experimental products that can provoke, and perhaps, solve some of these fundamental issues. In recent years, we’ve moved beyond crafting narratives that provoke thought, to actually building products that do both — provoke and solve problems. Circulaw is a good example of such a product built with actual users in mind. Having developers on the team who were involved in building products like Circulaw (and other market-ready solutions) gave us the opportunity to raise critical questions.

Product development at Dm presents unique challenges and opportunities, particularly when addressing systemic issues rather than simply filling market gaps or meeting unmet needs. Can we really build a product that addresses the problem of ownership and centralised governance? How far can we go in embedding our critical (but speculative) ideas into a product? Will people even understand and appreciate it? (Even our blogs are notoriously difficult to read). Who is our primary audience or user? Building a product that requires significant upfront resources and diverse capabilities compels us to answer these questions from the outset.

Are we coding too soon?

During the workshop, we had an opportunity to reflect critically on our current approach to transitioning projects into products, particularly how this process affects developers within Dark Matter Labs. One key takeaway was the importance of having a robust paper prototyping phase to validate key concepts and hypotheses before coding begins (tensions could emerge when project holders underestimate the labour of coding — and the labour of having to re-write it). This phase, alongside thorough user interviews and testing, would help refine smaller details early on. From a developer’s perspective, it’s much easier to focus on how to build something if the what has already been clearly defined. As Donghun pointed out, getting these what questions sorted beforehand allows developers to focus on building a product with technical integrity, without worrying about shifting goals.

There are definitely advantages to loosely structured projects within Dm which have been our default pattern — the ability to adapt to changing contexts, being open to radical iteration — but product development requires a different level of investment and nature of collaboration, which in turn demands new structures and practices. Perhaps it’s useful to clearly distinguish the paper-prototype phase supported by workshops, before attempting to start building digital prototypes.

We also realised there was room for improving how strategic designers and developers work together. How can we ensure smoother handovers from concept to execution? Developers thrive when they work on projects with real-world applications — projects that go beyond one-off workshop tools and are sustained long enough to generate meaningful data for future iterations. This sense of continuity and contribution is crucial for developer growth. Ideally, we envision a scenario where designers and developers co-create provocative projects that go live to meet real user needs, operating for a sufficient time to gather the data necessary for iteration and future improvements. This way, developers get a sense of growth and contribution, knowing their work has a lasting impact.

Wrapping up the workshop and looking ahead

This concludes the documentation of our first in-person workshop focused on product scoping. It wasn’t a very structured workshop at the beginning, but we managed to build the necessary structures and processes that allowed us to move to the next stage.

Defining the horizons
Deliberating on core principles and values of the product (more suggestions collected throughout the week)
Designing two types of scenarios and user journeys
Merging the two scenarios into one user journey and sketching paper wireframes
Prioritising what to develop/code
Discussing pathway strategies
Ideating around branding/identity
Identifying questions for the future (collected throughout the workshop)

These were some of the concrete steps we took, with countless conversations in between. As we move on to the next phase of production, we hope that this documentation will serve as a template for teams that are looking to explore (digital) products — bridging strategic design and product development, and making the move towards transitioning projects into products.

Lastly, we share some questions that we identified throughout the workshop, which we have ‘parked’ for now.

Do we need everything to be decentralised? How far does decentralisation go?
What kind of deliberation and decision-making model would the permissioning group adopt (e.g. consensus-based), and what is the reasoning?
How do we help space stewards (the permissioning group) shape the rules of the space? What kind of facilitation is needed?
Will financial value be generated by spaces? How do we deal with financial value without encouraging rent-seeking behaviours?
How could Horizon 1 look different from the current system (while still operating within existing systems)?
How do we convince cities of new ways of doing things?
What is "functionality" for research grant funders? And how do we best meet their expectations regarding tech products? (especially funders who are not typical product development funders)

Read Day 1: Transitioning from project to product

Read Day 2: User journey and scenario building

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA(National IT Industry Promotion Agency), P4NE(Partners for a New Economy), Parti

#1 Are we coding too soon? — Day 3 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


#1 User journey and scenario building — Day 2


This blog is the first in a series documenting the Re:Permissioning the City(PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 2: User journey and scenario building

On days two and three, we focused on developing the user journeys. The emphasis was placed on creating tangible, realistic use case scenarios which would help us identify the gaps in our concept and challenge where we might be relying too much on theory and assumptions.

We created a template that divides the system into front stage (frontend), covering user actions and visible interfaces, and back stage (backend), which handles the behind-the-scenes logic and processes supporting these interactions. We also listed some choices for scenario building, such as types of permissions, users, and spaces.

Feel free to adapt our template

We decided to prioritise the event organiser and space stewards (space owners, managers, broader stakeholders like neighbours) and split up into two groups, with one group focusing on a scenario around a music event, which deals with pre-defined/automated permissions and an exception approval case, and the other on a food related event that deals with bespoke permissions. We chose music and food specifically as these scenarios are likely to introduce tensions or conflicts. Noise level issues would allow us to explore how we might use sensors to verify and give real-time feedback to space users, preventing the escalation of conflict, and food/cooking would allow us to dig deeper into the liability mechanisms around fire risks, safety and hygiene.

Group 1 (left) on food/cooking and Group 2 (right) on music

Through the user journey exercise, we were able to clearly distinguish the differences between the three types of permission processes: pre-defined/automated, exception-based, and bespoke permissions.

Pre-defined/automatic
When requesting permissions to use a space, users will be able to choose from an existing template of rules. Template A might be suitable for loud music events with more than 50 people, template B might be suitable for small cooking classes, and template C for book clubs. These templates of rules (or rulebooks) can be initially drafted based on general liability considerations (number of people, types of activity, etc), and further adapted and modified through usage in a particular space. The idea is based on a precedent-based model, similar to case law: if permission has been granted previously for a kind of event without issues, similar future events will also be automatically permitted. For most events that can be classified under certain types of activities, these pre-approved templates will enable instant permissions, simplifying and speeding up the booking process.

Exception-based
Exceptions are cases where users request small deviations which need human approval. In one of our scenarios, Pim, who was hosting a religious worship event involving a music performance, wanted to request an increase in the maximum permitted noise level. This involves modifying a single clause in one of the templates of rules. The request is processed by a 'permissioning group': a group of people who have opted in to act as stewards of a particular space, with the responsibility to partake in decision-making as well as maintain the space. The members of the permissioning group reach a consensus-based decision on whether to approve or reject the exception. They are also made aware that the adapted permission they grant will become a template for future events.

Bespoke permissions
Bespoke permissions are reserved for rare cases where users request permission for a completely new type of activity which does not fit any of the pre-approved templates. In our scenario, this was a proposal for a large local produce market held in a park. A request for bespoke permissions triggers a slightly more complex set of actions than the exception-based permissions. The event organiser is prompted to construct a new template of rules based on the proposed activity. They also fill out a self-assessed risk form indicating their concerns and what they are excited about (the pros and cons). Submitting this request triggers a notification to the permissioning group, who are given a due date to reach a consensus to approve or reject the new template. Once accepted, the event is permitted, and future events with the same characteristics can use the template for automatic permission.
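The three permission paths above can be summarised in a small routing sketch. The function, its parameters, and the matching rule are simplified assumptions for illustration, not the actual system logic:

```python
def route_permission_request(activity_type: str,
                             templates: set[str],
                             requested_changes: dict) -> str:
    """Route a booking request into one of the three permission paths.
    `templates` is the set of activity types with pre-approved rule
    templates (precedents); `requested_changes` holds any clauses the
    organiser wants to modify."""
    if activity_type in templates and not requested_changes:
        return "pre-defined"  # precedent exists: instant permission
    if activity_type in templates:
        return "exception"    # small modification needs human approval
    return "bespoke"          # no precedent: a new template is required
```

This mirrors the case-law analogy: each approved bespoke or exception decision would add to `templates`, so similar future requests fall through to the pre-defined path.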

Principles and values

Through the user journey exercise which compelled us to sketch out the details of each action and process, we were also able to define the basic principles of the platform which reflect the underlying logic and values of our concept. They will be used to further define key concepts like the permissioning group, template of rules, feedback systems, incentives and liability mechanisms, and so on. These principles and values can be considered as version 1, which will be iterated later when we have more experience to draw upon.

Governance

Power and liability as inextricably linked: If you want to make decisions you need to share liability i.e. skin in the game

Prioritise proximity to space and physical presence (linked to shared risk and liability)
Giving away power is giving away liability (which is why space owners might want to share decision-making/permissioning power)

Permissioning based on precedents (like case law)

Every space starts with a basic template of rules, which is iterated thereafter
Everything is allowed (within legal limits) until something happens to change the rules
Templates need to be updated regularly (time-limited templates)

Permissions are peer reviewed (e.g. permissioning group)

The permissioning group performs the role of space stewards, responsible for maintaining permission templates and approving bespoke permissions
Anyone can join a permissioning group
The initial permissioning group can be formed through a combination of invitations (based on shared liability holders) and self opt-in through shared interests
Deliberations within the permissioning group prioritise consensus building through dialogic processes (rather than majority rule)
Permissioning group participants are given a choice to opt out of a particular decision
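One possible reading of the consensus-with-opt-out principle can be sketched in a few lines. The voting labels and the "all active members must approve" rule are our assumptions for illustration, not a settled design:

```python
def consensus_decision(votes: dict[str, str]) -> bool:
    """Decide whether a permissioning group approves a request.
    Members vote 'approve' or 'reject', or may 'opt-out' of this
    particular decision. The request passes only if every member who
    has not opted out approves; a fully opted-out group cannot approve.
    """
    active = [v for v in votes.values() if v != "opt-out"]
    return bool(active) and all(v == "approve" for v in active)
```

A single 'reject' therefore blocks the decision, which is what pushes the group towards dialogue rather than majority counting.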

Incentives

Prioritise system-level risk and benefit sharing, to avoid rent-seeking behaviours
Prioritise generating system-level incentives/benefits (rather than personal/individual ones)

Feedback

Based on incentives and positive feedback at the system level, rather than penalties and punishment at the individual level
Encourage feedback on rules/permissions, not on people and their conduct

Technology

We adopt technology not to maximise efficiency and profit, but to enable greater flexibility and freedoms. We acknowledge that technology could be exclusionary, and while we may not be able to address this immediately, we are committed to designing systems that prioritise inclusivity and accessibility. By embracing open standards and decentralisation, we aim to create tools that empower communities rather than control them.

Read Day 1: Transitioning from project to product

Read Day 3: Are we coding too soon?

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA(National IT Industry Promotion Agency), P4NE(Partners for a New Economy), Parti

#1 User journey and scenario building — Day 2 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


#1 Transitioning from project to product — Day 1


This blog is the first in a series documenting the Re:Permissioning the City(PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

In June 2024, we received good news from one of the many applications we had submitted to develop the Re:Permissioning the City platform. This grant, awarded by the National IT Industry Promotion Agency (NIPA) of South Korea, allowed us to spend the next 5 months developing the first digital prototype. Having spent the last 3 years developing the concept through small research grants, we were overjoyed to finally have the opportunity to start building something tangible.

Once we assembled the team, composed of three developers, a graphic designer, and four strategic designers, we gathered in London for a week-long workshop. Looking back, it was an ambitious, high-stakes plan that required turning theory into a concrete product design in a matter of 5 days. We were betting on our combined ‘collective intelligence’ to figure out this challenge together.

Day 1: Defining the problem space and scope of our intervention

Like any ‘design & innovation’ project, we started by collectively defining and narrowing down our area of intervention. We did this through discussing the problem space, our objectives and value proposition, and through defining the various ‘horizons’ of the product we were setting out to build.

Problem space

Fairness in allocation of spaces: in the case of Daegu and other public/government-owned spaces, the current process for allocating shared spaces is seen as unfair. For example, a simple first-come-first-served approach often fails to prevent the hoarding of use rights (whoever has more resources to submit applications has a higher chance of gaining rights). As seen in the case of the public square in front of Seoul City Hall, where right-wing Christian groups deliberately submit applications ahead of LGBTQI+ organisations to prevent them from hosting the queer festival, existing rules can be abused to discriminate against certain groups, which challenges the fairness and ethics of existing governance models.

Fairness in decision-making: existing governance around spaces is centralised and opaque, either controlled directly by space owners or shaped by rules set by intermediary organisations entrusted to manage spaces. Ordinary 'users' of spaces and other stakeholders (neighbours and others who have a stake) are almost always excluded from the rule-making and permissioning process.

Public value captured in private wealth: we challenge rent-seeking, private ownership models, where 1) public spaces are used to generate private wealth or 2) value generated by the public (e.g. rehabilitation through community activities) is captured solely by land/space owners. The focus is on ensuring that public spaces are used in a way that benefits the community rather than being a source of income for private entities.

Decision-making based on individual interests: we advocate for a decentralised, commons-based approach to decision-making. The use of spaces in the city is rarely a concern for the property owner alone. Rather, how spaces are used affects third parties in positive and negative ways, as well as the health of the city as a collective whole. This means decisions on how spaces are used should be made collectively, considering the public or commons' good rather than individual or organisational interests. The idea is to create a system where the use of space benefits the broader community.

Underutilisation of spaces: the current approach to managing public spaces is bureaucratic, which creates barriers to access and results in underutilisation. Even when spaces are managed by single entities (often NGOs and civil society organisations with a specific mandate), it takes a lot of resources to maximise utilisation, costs they often cannot afford.

Barriers to access: it's not easy for the average citizen to find spaces to do things. Spaces are often hard to find (there is no central database), and restrictions on types of use can be difficult to navigate.

Rules are restrictive: existing rules around spaces (what you can and cannot do) tend to be overly conservative, geared towards preventing potential conflicts. When people want to ask for bespoke permissions (if their activity does not fit into existing types of use), existing booking systems lack processes to handle these requests easily, instead reverting to ad-hoc, off-platform negotiation (or outright rejection). We need different kinds of rules and methods of negotiation that can 'liberate' spatial use, to accommodate more flexible and creative uses of space.

Hypothesis

Our hypothesis is that creating a system that enables easier (and democratic) access to public space for communities will remove barriers for people wanting to organise activities that generate social/cultural capital and public value. This will result in increased civic activity in a city (especially key for cities experiencing demographic/economic/social decline), which has broader societal benefits (reduced isolation, better mental health, less division).

How is what we are building different from conventional booking systems? Why is this way of doing things better?

Democratic: it opens up decision-making/rule-making around shared spaces to a wider range of stakeholders, and by encouraging a peer review/approval process, contributes to building democratic capacities.
Legitimacy and consent: a peer-reviewed permission process allows us to gather people's consent for activities that might not have been possible before. The net effect is that more events can happen in the city (with legitimacy) because we have a more effective way of revealing and implementing the views of the population.
Mission-driven: allowing space owners and citizens to make social impact more easily, rather than just maximising profit from rent-seeking activities.
Power distribution and liability sharing: liability and power are interlinked, which means if you have skin in the game, you get to participate in decision-making. The idea is to transition away from 'externalities', where the negative impacts of an individual's decisions can be displaced onto the commons.
Open source: we are building an interoperable open-source tool that people can fork and integrate into their existing systems.
Distribution of value: financial value derived from a space (e.g. an increase in property prices) is often hoarded by land/space owners. We will try to measure the non-financial value generated by civic activities, as well as distribute financial value across more stakeholders.

Horizons scoping

Typically, product teams create a product roadmap. Coming from a strategic design perspective, however, we decided to take a different approach. The key difference between a product roadmap and horizons scoping is that the former is execution-focused, while the latter focuses on identifying and assessing different "horizons" or stages of future opportunities, challenges, and strategic goals over a longer period of time. In practice, we adapted elements of both, focusing on describing the hypotheses we wanted to test while leaving room for uncertainty and more radical imaginations in Horizon 3 as an intended direction of travel.

Horizon 0 reflects the status quo, Horizon 1 is the scope which is narrowed down considerably to fit the timeline and expectations of the 2024 prototype grant. Horizon 2 reflects what we aim to build as the first full product released to the public, and finally Horizon 3 is a description of where our ambitions lie in the future. What we managed to map out during the workshop is in no way complete — in fact the process of mapping alerted us to critical gaps, such as the question of business models and incentive mechanisms, all of which will need to be defined further. But we share this as a snapshot of our thinking at stage 1 of the development journey.

Read Day 2: User journey and scenario building

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA (National IT Industry Promotion Agency), P4NE (Partners for a New Economy), Parti

#1 Transitioning from project to product — Day 1 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


Innopay

Mounaim Cortet to share insights on FiDA at Mobey Forum’s Amsterdam member meeting

Mounaim Cortet to share insights on FiDA at Mobey Forum’s Amsterdam member meeting from 19 Nov 2024 till 20 Nov 2024 Trudy Zomer 15 October 2024 - 08:59 Amsterdam, the Netherlands

Mounaim Cortet, Vice-President of INNOPAY, will be speaking at Mobey Forum’s Amsterdam Member Meeting, hosted by ING, on 19-20 November. The event will focus on key themes such as API monetisation, the EU’s Financial Data Access (FiDA) regulation, Embedded Finance and more.

Mounaim will share his insights on the strategic implications of FiDA, the challenges and considerations regarding FiDA schemes, and the strategic responses and opportunities for FIs. He will be joining an impressive lineup of speakers, including:

- Katleen Van Gheel, Global Head of Innovation, ING
- Hetal Popat, Director of Open Banking, HSBC
- Joris Hensen, Founder and Co-Lead, Deutsche Bank API Programme
- Vjekoslav Bonic, Head of Digital Channels & AI, Raiffeisen Bank International AG
- Gijs ter Horst, COO, Ximedes
- Patrick Langeveld, Open Banking Expert, ING

 

This event is open exclusively to Mobey Forum members, who include industry leaders, fintech professionals and Open Banking experts. If you’re a Mobey Forum member, don’t miss this opportunity to hear from the top voices in the industry. Register now in the Mobey Forum’s Online Member Community to secure your spot.

 

 


TBD

Our California DMV Hackathon Win: Privacy-Preserving Age Verification

Learn about our winning prototype for instant age verification within Square's Point of Sale system.

At the recent California DMV Hackathon, the Block team, represented by members from Square and TBD, won the Best Privacy & Security Design award for building a prototype of an instant age verification system. This solution utilizes mobile drivers’ licenses (mDLs) to provide secure, privacy-centric transactions for age-restricted purchases with Square’s Point of Sale (POS) system.

In this post, we’ll explore the core technical components behind our solution, which centered on using TruAge technology to enable seamless, secure age verification.

How TruAge QR Code Verification Works

At the heart of our prototype is the ability to scan and verify a TruAge Age Token QR code. These QR codes contain a verifiable credential (VC) that confirms a person’s legal age without exposing unnecessary personal information. Here’s a breakdown of how we approached verifying these credentials in our solution.

Decoding the QR Code Payload

The first step in the verification process was reading the QR code provided by the customer. TruAge QR codes follow a standardized format that encodes the verifiable presentation (VP) as compact CBOR.

Our team implemented a scanner using our open source web5-swift SDK that reads the QR code and decodes the CBOR-encoded payload. This CBOR format is efficient, allowing the verifiable presentation to be transmitted and processed quickly, minimizing any delays at the point of sale.
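TruAge’s exact payload layout isn’t reproduced in this post, but the CBOR mechanics can be illustrated with a minimal subset decoder (unsigned integers, text strings, arrays, and maps). This Python sketch is for illustration only; a production verifier would use a complete CBOR library, such as the one bundled with web5-swift.

```python
def decode_cbor(data: bytes, i: int = 0):
    """Decode a small CBOR subset (RFC 8949): unsigned ints, text
    strings, arrays, and maps. Returns (value, next_offset)."""
    mt, ai = data[i] >> 5, data[i] & 0x1F   # major type, additional info
    i += 1
    if ai < 24:                              # length/value encoded inline
        n = ai
    elif ai == 24:                           # one-byte length/value
        n, i = data[i], i + 1
    elif ai == 25:                           # two-byte length/value
        n, i = int.from_bytes(data[i:i + 2], "big"), i + 2
    else:
        raise ValueError("length encoding not supported in this sketch")
    if mt == 0:                              # unsigned integer
        return n, i
    if mt == 3:                              # UTF-8 text string
        return data[i:i + n].decode("utf-8"), i + n
    if mt == 4:                              # array of n items
        items = []
        for _ in range(n):
            v, i = decode_cbor(data, i)
            items.append(v)
        return items, i
    if mt == 5:                              # map of n key/value pairs
        out = {}
        for _ in range(n):
            k, i = decode_cbor(data, i)
            out[k], i = decode_cbor(data, i)
        return out, i
    raise ValueError(f"major type {mt} not supported in this sketch")

# A 1-entry CBOR map {"age_over": 21}: 0xA1 (map, 1 pair),
# 0x68 + 8 bytes (text string "age_over"), 0x15 (unsigned int 21).
payload = bytes([0xA1, 0x68]) + b"age_over" + bytes([0x15])
value, _ = decode_cbor(payload)              # → {"age_over": 21}
```

The compactness is visible here: an entire claim fits in 11 bytes, which is what keeps scan-to-verify latency low at the point of sale.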

Converting CBOR to JSON

Once we decoded the CBOR data, the next step was to parse it into a JSON-based verifiable presentation using the W3C Verifiable Credentials (VC) Data Model v1.1. This model is critical to ensuring interoperability across different platforms and services, as it standardizes how credentials are represented and exchanged in a decentralized manner.
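As a rough illustration of the target shape, a decoded presentation might look like the structure below. The field names follow the W3C VC Data Model v1.1, but the `ageOver21` claim name and issuer DID are hypothetical placeholders, not TruAge’s actual schema.

```python
# Illustrative verifiable presentation (VP) per the W3C VC Data Model
# v1.1. The "ageOver21" claim and the issuer DID are made-up examples.
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "verifiableCredential": [{
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:truage-issuer",
        "issuanceDate": "2024-01-01T00:00:00Z",
        "expirationDate": "2025-01-01T00:00:00Z",
        "credentialSubject": {"ageOver21": True},
    }],
}

def looks_like_vp(doc: dict) -> bool:
    """Cheap structural sanity check before any cryptographic work."""
    return (
        "VerifiablePresentation" in doc.get("type", [])
        and all("VerifiableCredential" in vc.get("type", [])
                for vc in doc.get("verifiableCredential", []))
    )
```

A structural check like this runs before signature validation, so malformed payloads are rejected without any cryptographic cost.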

Validating the Issuer’s DID

After converting the data into a verifiable format, we needed to validate the digital signature on the credential. We retrieved the issuer’s Decentralized Identifier (DID) from the TruAge server, which provided us access to a sandbox environment containing their list of authorized DIDs.

Using DIDs, we were able to validate the cryptographic signature to ensure that the credential was issued by a trusted TruAge provider. This validation step is critical for ensuring that the credential has not been tampered with and is issued by a legitimate authority.
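The two checks — issuer allowlisting and signature verification — can be sketched as follows. The allowlist contents and the HMAC-based signature are stand-ins for illustration only: real TruAge credentials carry public-key proofs resolved from the issuer’s DID document, and the actual sandbox DIDs are not reproduced here.

```python
import hashlib
import hmac
import json

# Hypothetical allowlist, standing in for the list of authorized DIDs
# retrieved from the TruAge sandbox environment.
TRUSTED_ISSUERS = {"did:example:truage-issuer"}

def verify_credential(vc: dict, proof_sig: str, key: bytes) -> bool:
    # 1. The issuer must appear on the trusted-issuer list.
    if vc.get("issuer") not in TRUSTED_ISSUERS:
        return False
    # 2. Verify the signature over the canonicalized credential body.
    #    HMAC-SHA256 is a stand-in here: production VCs use public-key
    #    signatures (e.g., Ed25519) verified against the issuer's DID
    #    document rather than a shared secret.
    body = json.dumps(vc, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof_sig)
```

Either failure mode — an unknown issuer or a tampered body — rejects the credential before any of its contents are trusted.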

Credential Content Verification

Once the issuer’s signature was validated, the next step was to check the contents of the verifiable credential itself. In this case, we looked for proof that the individual was over 21 and verified that the credential had not expired.

This lightweight verification process ensures that businesses can quickly and easily confirm a customer’s legal age, while protecting their privacy by not exposing sensitive information like birthdates or addresses.
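A minimal sketch of those two content checks might look like this; the `ageOver21` claim name and the ISO-8601 `expirationDate` format are assumptions for illustration, not TruAge’s published schema.

```python
from datetime import datetime, timezone

def check_claims(vc: dict, now=None) -> bool:
    """Confirm the over-21 claim and that the credential has not
    expired. Claim name and timestamp format are illustrative."""
    now = now or datetime.now(timezone.utc)
    # The only personal fact inspected is the boolean age claim.
    if vc.get("credentialSubject", {}).get("ageOver21") is not True:
        return False
    exp = vc.get("expirationDate")           # e.g. "2025-01-01T00:00:00Z"
    if exp is None:
        return False
    exp_dt = datetime.fromisoformat(exp.replace("Z", "+00:00"))
    return now < exp_dt
```

Note that nothing else in the credential is read: the verifier never sees a birthdate, only a yes/no answer plus a validity window.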

Building the Integration: Web5 and TruAge Libraries

To bring this solution to life, we used a few key technologies:

iOS: Our team developed the iOS implementation using the web5-swift library, which allowed us to efficiently handle the scanning, decoding, and parsing of the TruAge QR codes on Apple devices. You can explore the code here: web5-swift TruAge Credentials.

Android: For Android, we modified the TruAge library provided by Digital Bazaar to make it compatible with our solution. This involved adapting the library for seamless integration with our QR code parsing and validation logic. The code for this can be found here: TruAge Java VC Verifier.

Privacy and Security at the Forefront

Our approach ensures that personal information is protected at every stage of the transaction. By focusing solely on verifying the specific data point needed (in this case, whether someone is over 21), we avoid collecting or storing any unnecessary information. This is a win for both businesses and consumers, as it minimizes risk while maintaining a smooth user experience.

By integrating this technology into Square’s Retail POS system, we not only enhanced security but also brought innovative, privacy-preserving solutions to small businesses that need to comply with age verification laws. This prototype has the potential to extend to many other use cases, from secure employee onboarding to identity verification for suppliers and customers.

What’s Next?

Participating in the California DMV Hackathon is just the beginning of our efforts to drive adoption of mobile drivers’ licenses and secure age verification solutions. Our work continues in collaboration with the California DMV and other industry partners as part of the NIST consortium, aimed at standardizing and scaling mDLs across the United States.

Join us on October 22 for a live Show & Tell of the prototype!

Monday, 14. October 2024

HYPR

Top Cybersecurity Regulations for Financial Services in 2024


Financial services are one of the most targeted industries in the world for cyberattacks, suffering nearly 20% of all attacks in 2023. This is understandable considering the high-value outcomes of successful attacks and the fact that, despite supposed security improvements, attacks are still relatively successful, with 84% of finance organizations hit by a cyberattack going on to experience at least one breach.

Data breaches don't just affect the institution that's compromised but also affect confidence in the sector as a whole. The International Monetary Fund has highlighted the significant threat that weak financial services cybersecurity poses to the industry and the world. Potential outcomes range from a loss of confidence in financial services to widespread economic instability.

That's why global cybersecurity regulations have been ramped up over recent years, as they strengthen the security posture of individual firms and the industry overall. Here we'll look at the most important financial services cybersecurity regulations for 2024 and beyond.

New York — NYDFS Part 500

One of the US's most important pieces of cybersecurity legislation is the New York Department of Financial Services cybersecurity bill, technically known as 23 NYCRR Part 500. Enacted in 2017, the bill affects any firm that operates under the banking, insurance or financial services laws out of New York, which are most financial services firms in the US.

It requires firms to implement a cybersecurity policy over data governance, access controls and consumer privacy. It also obligates the introduction of more robust security methods, such as the deployment of multi-factor authentication for protecting non-public information, according to the NYDFS MFA requirements.

In November 2023, it added amendments, requiring firms to: 

- implement access and privilege management
- institute quarterly reporting to the board by the CISO
- increase the scope of incident reporting to include cybersecurity events such as ransomware
- administer annual risk assessments
- conduct annual cybersecurity awareness training that focuses on ransomware and social engineering
- conduct vulnerability management that includes annual penetration testing

In addition, the new amendment mandates that firms implement multi-factor authentication (MFA) for remote access and privileged accounts by November 2024. 

Upcoming Compliance Requirements 

By May 1, 2025, financial institutions must review access privileges for all users with access to sensitive information. This includes automated scans of information systems to identify vulnerabilities and manual review of systems that are not covered by automated scans. 

By November 1, 2025, organizations must develop and maintain a comprehensive asset inventory of their information systems that includes key information tracking (e.g., owner, location, etc.), policies for updating the asset inventory, and the procedure for disposing of information.

Pro tip: Consider implementing passwordless, phishing-resistant MFA, based on FIDO standards, to ensure that only cryptographically verified identities can access sensitive financial systems and prevent phishing attacks. These technologies can help companies improve compliance with stringent and evolving regulatory requirements such as NYDFS Part 500.

US — Gramm-Leach-Bliley Act (GLBA)

The GLBA has a specific Privacy of Consumer Financial Information Rule that directly affects financial services cybersecurity. This concerns non-public personal information (NPI) that a company will collect when informing about or providing a financial product or service. Fines for non-compliance can be up to $100,000 per violation and five years in prison for complicit directors.

US — Sarbanes-Oxley (SOX)

The original Sarbanes-Oxley Act was instrumental in codifying the disclosures companies must make to current or potential investors, as well as the penalties that are due for breaches (with executives being directly on the line for up to $1 million and ten years in prison). 

It has since been updated to include cybersecurity considerations. It now obligates all publicly traded companies in the US and their wholly-owned subsidiaries to declare adherence to cybersecurity best practices in areas such as authentication and data safety. They are also required to report any data breaches publicly.

Pro tip: Ensure secure employee identity proofing during onboarding by using a combination of background checks, strong authentication that includes secure cryptographic protocols and biometric validation to comply with Know Your Employee (KYE) regulations.

US — FFIEC Standards

The Federal Financial Institutions Examination Council (FFIEC) is an interagency body that sets standards for all federally supervised financial institutions, including their subsidiaries. The FFIEC cybersecurity best practices includes guidance on effective authentication and access risk management practices. The FFIEC authentication standards emphasize multi-factor authentication (MFA) as a critical security control against financial loss and data compromise, similar to the PSD2 Strong Customer Authentication mandate.

It includes references to NIST standards SP 1800-17 and SP 800-63B, which provide implementation guidelines for passwordless MFA based on FIDO specifications. In August 2024, the FFIEC announced that it will sunset its Cybersecurity Assessment Tool on August 31, 2025, and asks financial institutions to refer directly to relevant government resources, including the NIST Cybersecurity Framework 2.0 and the Cybersecurity and Infrastructure Security Agency’s (CISA) Cybersecurity Performance Goals.

US — FTC Safeguards Rule

The FTC Safeguards Rule requires non-banking financial institutions, such as mortgage brokers, auto dealers, and payday lenders, to implement a comprehensive security program to keep their customers’ information safe. The FTC Safeguards Rule had several new provisions that went into effect in 2023. Among the new statutes is a mandate for multi-factor authentication for anyone accessing customer information. It should be noted that this includes MFA for desktop and server access, not just applications.

US — NIST Cybersecurity Framework 2.0

The NIST Cybersecurity Framework (NIST CSF) was originally designed as a guide for businesses of all industries and sizes to manage cybersecurity risk. The newest version, CSF 2.0, addresses the evolution of technology towards cloud migration and SaaS by adding a Govern function and a set of searchable resources for security leaders to use to make the best decisions regarding their cybersecurity.

This framework is particularly relevant for financial organizations, which rely heavily on SaaS technology and cloud solutions and must protect a vast amount of sensitive data and information from data breaches, cyberattacks and operational failures.

Pro tip: Implement continuous authentication to validate user identity in real-time, ensuring security throughout the entire session. This type of adaptive authentication defends against risks related to stolen credentials and unauthorized access. 

US — Executive Order on Critical Infrastructure Cybersecurity

Enacted in 2013, the Executive Order on Critical Infrastructure Cybersecurity 13636 requires federal agencies to work together with the private sector to strengthen security in critical sectors such as water, electricity and healthcare. During the global coronavirus pandemic, the financial services sector was officially classified as a critical sector, as it was considered essential to maintaining the nation’s economic stability.

Organizations are encouraged to use the NIST CSF framework to align their cybersecurity risk with a strategic plan of defense. This includes information sharing, developing incident response and recovery plans, and strengthening cybersecurity resilience through measures such as MFA and threat detection. 

The mandates for 2024 and 2025 include requiring each sector to have a specific cybersecurity plan tailored to its risk, along with improved intelligence and threat sharing. In addition, it tasks different federal agencies with being responsible for different critical infrastructure (e.g. the Department of Energy is responsible for the security of the U.S.’s energy sector). It also requires the federal government to adopt minimum security requirements and a risk-based approach to critical infrastructure.

California — California Consumer Privacy Act (CCPA)

Introduced to help protect the privacy rights and consumer protections of Californians, the CCPA affects any company which does business with Californians and meets one of the following: 

- Has a gross revenue of over $25 million
- Buys, sells or receives personal data on 50,000 consumers
- Makes over half its revenue from selling consumers' personal information

The fines can be up to $2,500 for unintentional violations and $7,500 for intentional violations, which will be multiplied per record stolen in the case of a data breach.

EU — Payment Services Directive 2 (PSD2)

The PSD2 requirement was introduced to make it easier for financial services companies to integrate and securely share data while making payment systems safer. In addition, the law set specific technical standards for strong customer authentication and improving security measures. 

The measures affect all companies catering to consumers in the EU and any payments that start, travel through or end in the EU. This puts clear obligations on financial services cybersecurity, even for firms outside the EU.

An updated version of the framework, PSD3, is currently in review. PSD3 will introduce significant changes for banks and non-bank payment service providers (PSPs), as well as consumers. The changes include new Strong Customer Authentication (SCA) regulations, with stricter rules around data access, payment protection, and authentication of users. The final version is expected to be published late 2024 and be enforceable in 2026.

EU — NIS2 Directive

NIS2, or the Network and Information Security Directive 2, is an updated regulation from the European Union designed to strengthen cybersecurity across multiple industries. It will become law on October 17, 2024. NIS2 expands on the original NIS Directive by widening its scope and imposing stricter rules on security practices and incident reporting, with stiffer penalties for non-compliance.

Under NIS2, entities in sectors like energy, finance, transport, healthcare and manufacturing must implement strong cybersecurity protocols. These include effective risk management, strong authentication and access protocols, real-time threat monitoring, and rigorous incident reporting standards.

Importantly, the directive specifies the use of multi-factor authentication (MFA) and continuous authentication to protect network and information systems (Article 21 2(j)). NIS2 impacts not only major financial institutions, but also smaller financial entities, payment services, and digital wallets.

HYPR saves customers millions of dollars, with a 324% ROI. Read the Forrester report.

EU — Digital Operational Resilience Act (DORA) 

In response to increasing numbers of cybersecurity attacks and operational disruption after the financial crisis of 2018, the Digital Operational Resilience Act (DORA) is targeted towards increasing the resilience of the financial sector for businesses in the European Union and those dealing with EU-based customers.

It includes authentication and access control requirements for Information and Communication Technology (ICT) systems, which the financial industry in particular is increasingly relying on for the outsourcing of services that deal with sensitive data. DORA is aimed at helping to defend against the unauthorized access of malicious actors to this sensitive data that could lead to data breaches, security incidents, and operational disruptions.

EU — General Data Protection Regulation (GDPR)

All companies processing the data of European Union citizens are affected by the GDPR. The law determines how data is used and protected and governs how consent must be used for collecting it. Along with data usage, timely reporting of breaches is also obliged if it affects EU citizens.

For financial services cybersecurity, adhering to GDPR is essential. Failure to do so can lead to fines of up to €20 million or 4% of global annual revenue, whichever is higher, with Amazon receiving the biggest fine so far of $888 million.

UK — Data Protection Act

After the UK left the EU, it kept the GDPR which it passed into law as the Data Protection Act (2018). It is roughly the same as the EU-GDPR (just amended for UK citizens) but still carries the same requirements around data safety, consent and reporting, and fines for non-compliance.

Global - Payment Card Industry Data Security Standard (PCI DSS)

The PCI DSS covers the processors of payments from major credit and debit card companies. To achieve compliance, financial services cybersecurity programs must meet several obligations, such as protecting cardholder data, encrypting data in storage and transmission, and authenticating access to all system components. Breaches of the PCI DSS may result in fines and restrictions in using major credit cards.

The latest version, PCI DSS 4.0, introduces stricter authentication requirements specifically related to passwords and MFA. Passwords now have stricter specifications (e.g., resetting them every 90 days) and MFA requirements have extended beyond administrators accessing the cardholder data environment (CDE) to all types of system components, including cloud, hosted systems, on-premises applications, network security devices, workstations, servers and endpoints.

Pro tip: Ensure compliance with standard 8.3.3 by using automated, high-assurance identity verification methods when resetting user credentials / authentication factors. This standard requires user identity verification before modifying authentication to prevent attacks that target this reset process.

Singapore — Monetary Authority of Singapore Notices on Cyber Hygiene

The Monetary Authority of Singapore (MAS) regulates financial institutions in the banking, capital markets, insurance and payments sectors. The MAS has issued a collection of notices on cyber hygiene, which are a set of legally binding requirements that financial institutions must take to mitigate the growing risk of cyberthreats.

The cyber hygiene notices cover six key areas, which include securing administrative account access, regular vulnerability patching and mitigation controls for systems that cannot be patched, written and regularly tested security standards, perimeter defense systems, malware protection and multi-factor authentication for any system used to access critical information.

Other — Various U.S. State Biometric Laws

Multiple U.S. states have biometric privacy laws — such as the Illinois Biometric Information Privacy Act (BIPA) — that affect any company doing business with a resident of that state. These laws regulate collection and storage of biometric information, such as face scans, fingerprints, or voiceprints. The statutes point out that biometric identifiers are different from other types of sensitive information as they are biologically unique to the individual, and cannot be changed once compromised.

Consequences of Non-Compliance with Financial Cybersecurity Regulations

When businesses fail to comply with these financial cybersecurity regulations, they are subject to monetary penalties, increased regulatory scrutiny, and a higher risk of cybersecurity incidents. For example, the fines for NYDFS non-compliance can be $250,000 a day for ongoing non-compliance. These penalties and security incidents due to non-compliance also affect customer trust and the value of the brand. In 2022, Uber’s stock went down by 5% after its third data breach in three months. 

Along with operational disruption and a loss in revenue, cybersecurity incidents may result in legal action months or even years after the incident, as in the class action suit brought on behalf of CDK consumers after the MOVEit data breach.

Achieve Regulatory Compliance with Identity Assurance

The financial services sector is at high risk of cyberattacks due to the value of successful data breaches or account takeover attacks. To combat this, state, national and supranational governments and industry groups have introduced several financial services cybersecurity regulations to ensure best practice is deployed throughout the industry. 

A common thread throughout much of the financial services cybersecurity regulations worldwide is the protection of data and stronger identity security systems. Financial services organizations globally, including two of the top four banks, rely on HYPR  to secure their systems and achieve regulatory compliance.

HYPR combines FIDO2 passwordless MFA, continuous adaptive risk response and automated identity verification to secure finance organizations while improving user experience. Learn more about HYPR’s security certifications and how our identity assurance platform helps you comply with financial cybersecurity regulations worldwide.

Key Takeaways:

- Updates To Cybersecurity Regulations: Regulations are becoming more stringent across various frameworks, requiring frequent audits, vulnerability scans, and comprehensive asset inventories to improve cybersecurity and compliance.
- A Global Focus on Financial Cybersecurity: Regulations like GDPR, PSD2, PCI DSS 4.0, and the new EU DORA focus on data protection, strong authentication and cyber resilience.
- Consequences of Non-Compliance: Non-compliance with financial cybersecurity regulations can result in severe monetary penalties, reputational damage and legal action.

Datarella

Our Data Authenticity Chain


This is the third article in a series of technical posts about how Track & Trust works at a component level. The world today is full of fake news and dubious “facts.” Consequently, we face a significant challenge in verifying the accuracy of the data we receive. Moreover, a major part of this challenge is identifying the source of this data. We can’t predict who the end users of the Track & Trust system will be or exactly what they will want to communicate, which makes this task even more difficult. To address this issue, we must ensure that data entering our system are valid. This post explores how the “Trust” part of Track & Trust works. It explains exactly how we maintain the chain of data authenticity.

Quick navigation links to the follow-up articles will be provided at the bottom of each article once the series is complete. For now, let’s jump in.

Establishing a foundation for the data authenticity chain

We designed our system around key requirements that establish a foundation for data authenticity. Our goal was to create a flexible system that can work with any logistics company, regardless of their internal processes. This flexibility is a key benefit of Track & Trust and allows us to collaborate with a wide range of partners. Furthermore, logistics companies can increase the number of data points they receive about their shipments from the field by using Track & Trust.

This, in turn, enables them to achieve probabilistic 360° supply chain tracking. Our team structured the Track & Trust data to integrate easily into any logistics database. In particular, we use a series of linked cryptographic signatures and blockchain transactions to create this data authenticity chain. Finally, this chain of custody has a specific purpose. It ensures that we can authenticate and validate offline events once they reach our servers.

How does the data authenticity chain work?

TL;DR: We leverage APIs to take inputs from our customers (logistics firms) as well as to give them valuable probabilistic 360° supply chain tracking data back. For demonstration purposes we have built a front-end website to make the system tangible, but the magic happens via our Swagger API.

The processes surrounding our data authenticity chain are fairly technical. To make them easier to understand, we’ve formatted the workflow into a sequence diagram that anyone can follow.

In summary, our data authenticity chain is simply a way of validating, recording and making messy data from the field trustworthy. Once that’s accomplished, we leverage our blockchain toolkit to make those data immutable and highly tamper-resistant. It’s a chain of custody for that data that includes built-in proof of origin. This, in turn, enables traceability and trust beyond the current state of the art.
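While the production implementation details aren’t spelled out in this post, the general pattern of a chain of linked signatures can be sketched as follows. The record layout and the HMAC signing key are illustrative stand-ins for whatever signature scheme and blockchain anchoring the real system uses.

```python
import hashlib
import hmac
import json

def append_event(chain: list, payload: dict, key: bytes) -> list:
    """Append a field event, linking it to the previous record's hash.
    Record layout and HMAC key are illustrative stand-ins."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    chain.append({**body, "hash": digest, "sig": sig})
    return chain

def verify_chain(chain: list, key: bytes) -> bool:
    """Walk the chain: every record must hash correctly, link to its
    predecessor, and carry a valid signature."""
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["sig"], expected):
            return False
        prev = rec["hash"]
    return True
```

Because each record commits to its predecessor’s hash, altering any earlier event invalidates everything that follows it, which is what gives the chain its tamper resistance once the head hash is anchored on-chain.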

Our next post will cover all of the ways that we can view this information. We’ll also be covering the orchestration systems operating in the background that enable us to do over the air updates to the hardware.  There will be dashboards, monitoring and CI/CD galore for your perusal.

<< Previous Post

Next Post >>

The post Our Data Authenticity Chain appeared first on DATARELLA.


KuppingerCole

Guardians Under Pressure: Mental Health in the World of Cybersecurity


by Warwick Ashford

In today’s hyper-connected world, cybersecurity professionals protect organizations from increasingly complex threats. While essential for safeguarding data and digital infrastructures, this work often takes a mental toll. Pressures arise from regulatory demands, business expectations, law enforcement interactions, cybercriminals, and IT complexity.

Regulatory Pressures and Compliance

Compliance with regulations like GDPR, HIPAA, and PCI DSS requires constant monitoring and attention to detail. The consequences of non-compliance heighten anxiety for professionals responsible for ensuring strict adherence.

Business Demands and Pace of Work

Cybersecurity teams face constant pressure as businesses drive digital transformation. Balancing business goals with preventing vulnerabilities leads to exhaustion. The demand to "do more with less" and justify security investments adds stress, especially when prevention's value is hard to quantify.

Law Enforcement and Criminal Activity

Collaborating with law enforcement and combating cybercriminals, including organized crime and state actors, brings additional stress. Investigating breaches and countering these threats can take a psychological toll.

Technological Complexity and Uncertainty

The fast-evolving tech landscape requires continuous learning. The unpredictability of threats and managing complex systems lead to burnout and self-doubt, increasing pressure to stay ahead of attackers.

Day-to-Day Cybersecurity Operations

Cybersecurity professionals also manage daily tasks like network monitoring and incident response. The constant vigilance and high task volume often lead to cognitive overload, disrupting work-life balance and causing fatigue.

A Call to Address Mental Health

The mental health challenges facing cybersecurity professionals are significant. Organizations must address these challenges and provide support. This important issue will be discussed at KuppingerCole’s Cyberevolution 2024 conference in Frankfurt, Germany, from 3–5 December.

Addressing mental health is key to fostering a resilient workforce. Recognizing this helps protect both digital infrastructures and the professionals who defend them. Providing realistic workloads, work-life balance, and destigmatizing mental health is essential for a sustainable workforce.

At cyberevolution 2024, speakers on this topic include Sarb Sembhi, CTO at Virtually Informed; Jasmine Eskenzi, Co-Founder & CEO of The Zensory; Inge van der Beijl, Director Innovation at Northwave Investigation and Innovation; and Hermann Huber, CISO at Hubert Burda Media.

They will address topics such as "Cyber Mindfulness: Harnessing Mindfulness to Combat Social Engineering Attacks and Empower the Cyber Workforce of the Future," "Cybersecurity and Mental Health: Navigating Crisis Impact," and "Stress, Burnout and Declining Motivation in Cybersecurity." There will also be a panel discussion on "Addressing Mental Health Challenges in Cybersecurity."

Sunday, 13. October 2024

KuppingerCole

Going Beyond Identity: A Deep Dive into Zero Trust Security

Matthias and Alejandro discuss the concept of Zero Trust, emphasizing its importance in modern cybersecurity. They explore the core principles of Zero Trust, including continuous monitoring, data protection, and the common misconceptions surrounding it. The discussion highlights the significance of automation and orchestration in enhancing security measures and provides real-world examples of successful Zero Trust implementations. The conversation concludes with insights into future trends and the evolving nature of cybersecurity threats.



Friday, 11. October 2024

TBD on Dev.to

Known Customer Credential Hackathon

tbDEX is an open messaging protocol that enables liquidity seekers to connect with liquidity providers. This means that as a liquidity provider, your business can be the backend supplier in several payment applications.

Performing KYC on repeat customers every time they attempt to transact with you from a different payment app would be a pain. To avoid this, you will use the Web5 SDK to issue a Known Customer Credential (KCC) to a customer, Alice, who you have already completed KYC on. You will store the JWT representing the KCC in Alice’s Decentralized Web Node so that she can present it to your business from any payment app.

Challenge

1. Create a Decentralized Identifier (DID) and DWN to use as the Issuer. Bonus: use the DIF community DWN instance hosted by Google Cloud.

2. Issue Alice a KCC that includes evidence. Note that for this challenge, you do not need to implement an actual identity verification flow.

3. Install the VC Protocol onto your DWN so that you can communicate with Alice's DWN.

4. Obtain permission to write to Alice's DWN by sending a GET request to: https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri}

5. Store the VC JWT of the KCC as a private record in Alice's DWN.

Submit

To enter a submission for this hackathon, provide the DWN Record ID of the KCC.

Resources

- Alice’s DID: did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao
- web5/credentials SDK
- web5/api SDK
- How to create a DID and DWN with Web5.connect()
- Obtain Bearer DID - required to sign KCC
- Known Customer Credential Schema
- How to issue a VC with Web5
- Example of issuing a KCC with Web5
- Example of issued KCC
- How to install a DWN Protocol
- How to store a VC in a DWN

Contact Us

If you have any questions or need any help, please reach out to us in our #kcc-hackathon channel on Discord.


Spruce Systems

Fighting Election Deepfakes with Digital Identity

Discover how digital signatures can ensure the authenticity of online announcements, helping to restore trust in a world where misinformation thrives.

One of the biggest pieces of news of the 2024 U.S. Presidential election has been the July 20th announcement by President Joe Biden, made via a letter that many saw first on social media, that he was withdrawing from the race. The immediate reaction was skepticism and disbelief – an understandable reaction in an era when it seems like more and more of what we see on the internet is fake, false, or misleading. 

The fallout of this skepticism was luckily limited. However, misinformation can have major impacts on people’s behavior, and the broader mistrust it sows can be deeply toxic for an entire society. Current attempts to deal with the problem, such as by fact-checking organizations, can’t keep up, especially as generative AI makes fakes much easier to produce.

It’s time for a different way to authenticate content online, and luckily, there’s one not too far over the horizon: digital signatures based on privacy-preserving cryptography can be used to prove the real source of online content. States, including California, are testing out a state-issued digital ID, known as the mobile driver’s license (mDL), based on these digital signatures. 

Particularly for important announcements from trusted sources, trustworthy digital signatures could have a huge positive impact on the information environment, and ultimately could help rebuild the trust that has been eroded by the online free-for-all of the past decade.

Let’s explore how that could work.

The Death of Drawn Signatures

President Biden’s withdrawal announcement was made, not in a network-televised speech, but via a letter on Biden’s letterhead. The letter was distributed to news outlets but also posted to social networks, including X (formerly known as Twitter), where many commentators saw it first. This cut out key sources of trust and vetting: the authenticity of a direct spoken statement and the third-party confirmation of a news organization.

It’s little surprise, then, that some speculated that Biden’s letter might not be real. After all, Twitter accounts can be hacked, and anyone might have created the letter. Notably, skeptics cast doubt specifically on Biden’s signature – the very tool humans have used to prove the authenticity of communications for centuries, even millennia.

Those doubts left a gap for a fake video of Biden purportedly making the announcement. That’s just one example of the fake videos, audio, and photos we’re likely to see in the coming weeks and months, as partisans engage in boundary-breaking informational warfare. 

Disinformation has always been one of the dark arts of politics, but new generative AI tools make such fakery so easy that fact-checkers can never hope to keep up. In fact, AI and automation are also empowering “bots” on social media and across the internet, which can simulate real humans’ reactions to content, misleading some victims even more severely with false “social proof.” In one worrying recent example, Russian operatives have used AI to impersonate Americans supposedly opposed to military support for Ukraine.

With the internet increasingly the center of political discussion in America and around the world, and with the most powerful politicians in the world making major announcements via social media, we need a better way to separate the fake from the real.

The Unfakeable Proof of Digital Signatures

To understand how content could be reliably associated with a real-world identity, we have to touch on a somewhat difficult topic: cryptography.

The problem with verifying content online up to now is that the infrastructure of the internet has no built-in identity system, and any digital file can be copied. That’s why digital information systems “break” traditional forms of attestation – anyone can post any file, from any location, and claim to be anyone. Not only can you copy-paste a written signature onto any document, you can now fairly effectively fake video of someone making a statement. While dedicated digital sleuths can spot impostors in various ways, it’s very difficult for amateurs.

Reliably “signing” a digital message instead relies on encryption techniques that aren’t exactly new but are still unfamiliar—digital signatures and public-key cryptography. 

In very broad terms, online public information could be reliably signed using a digital certificate issued and affirmed by a known source – possibly a driver’s license issuer, but not exclusively, as we’ll see. The message content would then be cryptographically condensed, or “hashed,” and signed with the key behind that certificate, producing a string of characters that can only be matched back to that specific content-signature pair.

That hash file would be attached to a public post, and anyone who wanted to affirm its authenticity could check that this specific content was signed by a specific person’s certification. To draw a rather abstract metaphor, it’s like signing a document with ink that contains all the letters in the document itself – a signature unique to one piece of data.

This leaves out a lot of technical detail, but what matters is that this system can’t be spoofed or broken, except by extraordinary measures, such as physically stealing certificate-signing hardware from the DMV. In the case of our election example, the President could certify the content of his social media statement with a digital signature, using his mobile driver’s license or other verifiable digital ID, and the public would be able to trust its authenticity.

This type of digital signature has another advantage – you don’t actually have to reveal your identity to sign content. Digital ID systems, such as mobile driver’s licenses, have what are known as ‘selective disclosure’ features, meaning you can attest only to the specific information you want. That can include simply affirming that “a human produced this content,” without disclosing your name. Or you can show that it was made by “a human from Dallas,” without disclosing your address. 

This is important to emphasize because the idea of a digital identity can initially sound oppressive or authoritarian – and it certainly can be, if implemented according to authoritarian ideals. But under the right regulatory and technology framework, digital IDs can be far more privacy-preserving than current models.

Most importantly, and in sharp contrast with the most dystopian fears, you won’t even have to depend on a government agency to attest to your identity.

This is a widely-shared vision of the digital identity future, one that aligns with the values of privacy, individual freedom, and democratic choice. At the same time, it offers a vast improvement in online trust over the current status quo. 

Over the next few weeks, Americans and many others will see yet again just how flawed our online discourse is. Being able to prove who’s talking, whether President or pauper, is an obvious starting point for fixing it.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.